Scaling on Demand

Dec 18, 2019

Depending on the type of workload, it can be worthwhile for server operators to create virtual machines automatically for a limited period of time and, once the work is done, delete them again just as automatically; for example, when a computing job would take longer on their own hardware than is acceptable. Our cloud is happy to take care of this for you, even if it involves resources other than processors.

In this example, I will go over the first steps for this scenario: how to talk to the API of our OpenStack platform from the Linux command line.
To do this, you need an OpenStack client on the host that runs the scaling. On Ubuntu, for example, this would be the package python-openstackclient.
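If the client is not installed yet, this is a one-liner; note that on newer Ubuntu releases the package may be called python3-openstackclient instead:
    sudo apt install python-openstackclient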
Next, you need the project-specific “OpenStack RC File v3” from the OpenStack WebUI. After logging into the project, it can be downloaded via the drop-down menu labelled with your own project ID at the top right.

Source the file so that the client knows which project to address on the API; this requires entering your password:
    source XXXX-openstack-XXXXX-openrc.sh

To set the options that need to be passed when starting a new instance, you can now look up the possible values (UUIDs; except for the key pair), pick the right ones and note them down (see the variable sketch after the list):

  • Source, the installation image to be used:
        openstack image list
  • Flavor, i.e. what dimensions the VM to be built should have:
        openstack flavor list
  • Networks, here I recommend the project’s own subnet, which is shielded from the outside:
        openstack network list
  • Security Groups, at least the default security group is recommended here, so that all VMs within the project can talk to each other freely:
        openstack security group list
  • Key Pair, to connect via SSH:
        openstack keypair list
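As a minimal sketch of how the noted values could be kept for the create command below, they can be stored in shell variables; the variable names match the next command, while the UUIDs, key pair and server name are placeholders to replace with your own:
    imID=<image-UUID>           # from "openstack image list"
    flID=<flavor-UUID>          # from "openstack flavor list"
    nID=<network-UUID>          # from "openstack network list"
    sgID=<security-group-UUID>  # from "openstack security group list"
    Name=<keypair-name>         # from "openstack keypair list"
    Servername=<server-name>    # freely chosen instance name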

Then the instance can be started. If more than one value is to be passed for an option, list that option several times with one value each; the instance (server) name comes last:
    openstack server create --image $imID --flavor $flID --network $nID --security-group $sgID --key-name $Name $Servername

Ta-da, the VM is ready and willing to make its contribution to the day-to-day business.
If you would like more than one machine, e.g. three, additionally add these options in front of the server name:
    --min 3 --max 3

To go easy on your wallet, the servers can be deleted once the work is done:
    openstack server list
    openstack server delete $deID

This could also be done automatically, i.e. without looking for the ID of the instance:
    deID=`openstack server list | grep $Servername | cut -d ' ' -f 2` ; openstack server delete $deID
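The `cut -d ' ' -f 2` simply picks the ID column out of the table that `openstack server list` prints. Depending on the client version, `openstack server delete` also accepts the server name directly, so the lookup can be skipped as long as the name is unique within the project:
    openstack server delete $Servername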

As mentioned, it is a good idea to put the create, compute and delete commands into a script. If your own Bash skills are not up to the task, you are welcome to contact our MyEngineers. Putting a load balancer in front of the machines is also not a problem here.
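To give a rough idea, here is a minimal sketch of such a script, assuming the variables from the list above, a default user named ubuntu on the image and a hypothetical job script compute-job.sh; it is meant as a starting point, not a finished implementation:
    #!/bin/bash
    # Create: boot the instance and wait until it is ACTIVE
    openstack server create --image $imID --flavor $flID --network $nID \
        --security-group $sgID --key-name $Name --wait $Servername

    # Find the IP address the instance got in the project network
    srvIP=$(openstack server show $Servername -f value -c addresses \
        | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | head -n 1)

    # Compute: run the (hypothetical) job on the new machine via SSH
    ssh -o StrictHostKeyChecking=no ubuntu@$srvIP 'bash -s' < compute-job.sh

    # Delete: remove the instance again once the job has finished
    openstack server delete --wait $Servername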
