Scaling on demand

30 November, 2021

Sebastian Saemann
CEO Managed Services

Sebastian came to NETWAYS from a large German hosting provider because he was getting bored there. With us he can make much better use of his talents, as he leads the Managed Services team. When he is not patching cloud components, he is trying to set a new lap record on his motorcycle.


Depending on the kind of in-house workload, it can be worthwhile for some server operators to create virtual machines automatically for a limited time using a script and, once the work is done, to delete them again just as automatically; for example, when a computing job would take longer on your own hardware than is acceptable. Our cloud will be happy to take care of this for you – even if it involves resources other than processors.

In this example, I will go through the first steps of this scenario and show how to talk to the API of our cloud platform using the Linux CLI. This requires an OpenStack client on the host that handles the scaling. Under Ubuntu, for example, this would be the python-openstackclient package. Next, the project-specific “OpenStack RC File v3” from the OpenStack WebUI is required. This file can be downloaded after logging into the project, via the drop-down menu with your own project ID at the top right. Source the file so that the client knows which project it should talk to via the API (this requires entering the password):

source XXXX-openstack-XXXXX-openrc.sh
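On a fresh Ubuntu host, the preparation could look roughly like this (a sketch; on newer Ubuntu releases the package is called python3-openstackclient, and openstack token issue is simply a quick way to verify that the authentication works):

sudo apt install python-openstackclient   # newer releases: python3-openstackclient
source XXXX-openstack-XXXXX-openrc.sh     # prompts for the API password
openstack token issue                     # succeeds only if project, user and password match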

In order to set the options that have to be passed when starting a new instance, you can now look up the possible values (UUIDs, except for the key pair), pick the right ones and note them down (a scripted version of these lookups follows after the list):

  • Source, the installation image to be used: openstack image list
  • Flavor, i.e. what dimensions the VM to be built should have: openstack flavor list
  • Networks, here I recommend the project’s own, externally secured subnet: openstack network list
  • Security groups, at least the default security group is recommended here, so that all VMs within the project can talk to each other without restrictions: openstack security group list
  • Key Pair, for connecting via SSH: openstack keypair list
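For a script, these lookups can also be done non-interactively: the corresponding show subcommands accept the names from the lists above, and -f value -c id prints only the UUID. A sketch with made-up names, to be replaced with those from your own project:

imID=$(openstack image show "Ubuntu 22.04" -f value -c id)        # example image name
flID=$(openstack flavor show "s1.medium" -f value -c id)          # example flavor name
nID=$(openstack network show "my-project-net" -f value -c id)     # example network name
sgID=$(openstack security group show default -f value -c id)
Name="my-keypair"                                                  # key pair name from openstack keypair list
Servername="batchworker"                                           # example instance name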

The instance can then be started. If more than one value is to be passed for an option, repeat that option with one value each time; the instance or server name comes last:

openstack server create --image $imID --flavor $flID --network $nID --security-group $sgID --key-name $Name $Servername
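If a script should only continue once the machine is actually usable, it can poll the build status; a minimal sketch using the same placeholder names (alternatively, the create command also accepts a --wait option):

while [ "$(openstack server show $Servername -f value -c status)" != "ACTIVE" ]; do sleep 5; done
openstack server show $Servername -f value -c addresses           # shows the assigned IP addresses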

And there you go: the VM is up and running and ready to make its contribution to day-to-day business. If you would like more than one machine, e.g. three, also pass these options before the server name:

--min 3 --max 3
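Put together, such a call could then look like this (same placeholder variables as above):

openstack server create --image $imID --flavor $flID --network $nID --security-group $sgID --key-name $Name --min 3 --max 3 $Servername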

To save money, the servers can also be deleted once the work is done:

openstack server list
openstack server delete $deID

This also works automatically, i.e. without looking up the ID of the instance by hand:

deID=$(openstack server list | grep $Servername | cut -d ' ' -f 2); openstack server delete $deID
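If several instances were started via --min/--max, or if grep and cut feel too fragile, the list command can filter by name and output only the ID column; a sketch:

openstack server list --name "$Servername" -f value -c ID | xargs -r -n 1 openstack server delete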

As already mentioned, integrating the create, compute and delete commands into a script is a good idea; a rough sketch follows below. If your own Bash skills are not sufficient for this, you can turn to our MyEngineer®. Interposing a load balancer is no problem here either.
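For illustration, this is what such a wrapper could look like as a whole; everything project-specific in it (file names, image, flavor, network, key pair, the SSH user and the job command) is a placeholder and has to be adapted:

#!/bin/bash
# Sketch: create a worker VM, run a job on it, delete it again.
set -e

source XXXX-openstack-XXXXX-openrc.sh                               # credentials, see above

imID=$(openstack image show "Ubuntu 22.04" -f value -c id)          # example names, replace with
flID=$(openstack flavor show "s1.medium" -f value -c id)            # values from your own project
nID=$(openstack network show "my-project-net" -f value -c id)
sgID=$(openstack security group show default -f value -c id)
Name="my-keypair"
Servername="batchworker"

# create the instance and wait until it reports ACTIVE
openstack server create --image $imID --flavor $flID --network $nID \
  --security-group $sgID --key-name $Name --wait $Servername

# fish the first IPv4 address out of the addresses column
IP=$(openstack server show $Servername -f value -c addresses | grep -oE '[0-9]+(\.[0-9]+){3}' | head -n 1)

# run the actual computing job (user and command are placeholders);
# in practice you may have to wait a little longer until SSH is reachable
ssh -o StrictHostKeyChecking=accept-new ubuntu@$IP "./run-my-job.sh"

# clean up again once the work is done
openstack server delete --wait $Servername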
