Nov 29, 2015
6:06:15pm
capo All-American
OK, so it seems easy to do this using Vagrant, except for one tiny problem.
I run the following two lines:

export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash

It starts out looking GREAT... (if it's too long, don't worry about reading it. I am working on figuring it out, but in case anybody gets bored, this is the FULL error.)

lds$ curl -sS https://get.k8s.io | bash
Downloading kubernetes release v1.1.2 to /Users/lds/kubernetes.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  182M  100  182M    0     0  7542k      0  0:00:24  0:00:24 --:--:-- 6907k
Unpacking kubernetes release v1.1.2
Creating a kubernetes on vagrant...
... Starting cluster using provider: vagrant
... calling verify-prereqs
... calling kube-up
Bringing machine 'master' up with 'virtualbox' provider...
Bringing machine 'minion-1' up with 'virtualbox' provider...
==> master: Importing base box 'kube-fedora21'...
==> master: Matching MAC address for NAT networking...
==> master: Setting the name of the VM: kubernetes_master_1448842465673_61339
==> master: Clearing any previously set network interfaces...
==> master: Preparing network interfaces based on configuration...
master: Adapter 1: nat
master: Adapter 2: hostonly
==> master: Forwarding ports...
master: 22 => 2222 (adapter 1)
==> master: Running 'pre-boot' VM customizations...
==> master: Booting VM...
==> master: Waiting for machine to boot. This may take a few minutes...
master: SSH address: 127.0.0.1:2222
master: SSH username: vagrant
master: SSH auth method: private key
master: Warning: Connection timeout. Retrying...
master:
master: Vagrant insecure key detected. Vagrant will automatically replace
master: this with a newly generated keypair for better security.
master:
master: Inserting generated public key within guest...
master: Removing insecure key from the guest if it's present...
master: Key inserted! Disconnecting and reconnecting using new SSH key...
==> master: Machine booted and ready!
==> master: Checking for guest additions in VM...
==> master: Configuring and enabling network interfaces...
==> master: Mounting shared folders...
master: /vagrant => /Users/lds/kubernetes
==> master: Running provisioner: shell...
master: Running: /var/folders/kr/zmshxsvj38x508v9whbnd93r0000gp/T/vagrant-shell20151129-2074-190c30s.sh
==> master: Adding kubernetes-minion-1 to hosts file
==> master: Provisioning network on master

and this is the error I get:

master:
==> master:
==> master: One of the configured repositories failed (Fedora 21 - x86_64),
==> master: and yum doesn't have enough cached data to continue. At this point the only
==> master: safe thing yum can do is fail. There are a few ways to work "fix" this:
==> master:
==> master: 1. Contact the upstream for the repository and get them to fix the problem.
==> master:
==> master: 2. Reconfigure the baseurl/etc. for the repository, to point to a working
==> master: upstream. This is most often useful if you are using a newer
==> master: distribution release than is supported by the repository (and the
==> master: packages for the previous distribution release still work).
==> master:
==> master: 3. Disable the repository, so yum won't use it by default. Yum will then
==> master: just ignore the repository until you permanently enable it again or use
==> master: --enablerepo for temporary usage:
==> master:
==> master: yum-config-manager --disable fedora
==> master:
==> master: 4. Configure the failing repository to be skipped, if it is unavailable.
==> master: Note that yum will try to contact the repo. when it runs most commands,
==> master: so will have to try and fail each time (and thus. yum will be be much
==> master: slower). If it is a very temporary problem though, this is often a nice
==> master: compromise:
==> master:
==> master: yum-config-manager --save --setopt=fedora.skip_if_unavailable=true
==> master:
==> master: Cannot retrieve metalink for repository: fedora/21/x86_64. Please verify its path and try again
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
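For anyone who hits the same wall: here is a rough workaround sketch, assuming the root cause is that the Fedora 21 repo mirrors are unreachable from inside the VM. It just follows yum's own suggestion #4 from the log above; the `master` VM name comes from the log, and the `~/kubernetes` path is an assumption based on where the release unpacked, so adjust both to your setup.

```shell
# Hypothetical workaround: mark the failing Fedora repo as skippable
# inside the master VM, then re-run the provisioner.
# Run from the directory that contains the generated Vagrantfile.
cd ~/kubernetes   # assumed checkout path, based on the log above

# Apply yum's suggested --setopt inside the guest over SSH.
vagrant ssh master -c \
  "sudo yum-config-manager --save --setopt=fedora.skip_if_unavailable=true"

# Re-run the shell provisioner that failed.
vagrant provision master
```

No guarantees this gets the cluster fully up (the provisioner may still need packages from that repo); it only stops yum from hard-failing on the unreachable mirror.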