mistio/mist-ce: Mist is an open source, multicloud management platform

Mist Cloud Management Platform - Community Edition

Mist is an open source platform for managing heterogeneous computing infrastructure, aka a Multicloud Management Platform.

The managed computing resources may run on any combination of public clouds, private clouds, hypervisors, bare metal servers, and container hosts.

Mist is developed by Mist.io Inc. The code for the Community Edition is provided under the Apache License v2. The Enterprise Edition and the Hosted Service include plugins for Governance, Role-Based Access Control & Cost Insights. They are available for purchase at https://mist.io. Paid support plans are available for any edition.


Who needs Mist?

Organizations that depend on hybrid or multi-cloud infrastructure

Organizations that provide computing resources to their users in a self-service fashion

They often end up building silos of distinct tools, processes & teams for each supported platform, introducing operational complexities which can affect both security and efficiency.

As heterogeneity increases, it becomes increasingly difficult to

train users

set access control rules

set governance policies like quotas and other constraints

audit usage

monitor/optimize costs

automate complex deployments

set up metering & billing

Mist provides a unified way to operate, monitor & govern these resources. The mission statement of the Mist platform is to help commoditize computing by alleviating vendor lock-in.

Features

Instant visibility of all the available resources across clouds, grouped by tags

Instant reporting/estimation of the current infrastructure costs

Compare current & past costs, correlate with usage, provide right-sizing recommendations (EE/HS only)

Provision new resources on any cloud: machines, volumes, networks, zones, records

Perform life cycle actions on existing resources: stop, start, reboot, resize, destroy, etc.

Instant audit logging for all actions performed through Mist or detected through continuous polling

Upload scripts to the library, run them on any machine while enforcing audit logging and centralized control of SSH keys

SSH command shell on any machine within the browser or through the CLI, enforcing audit logging and centralized control of SSH keys

Enable monitoring on target machines to display real time system & custom metrics and store them for long term access

Set rules on metrics or logs that trigger notifications, webhooks, scripts or machine lifecycle actions

Set schedules that trigger scripts or machine lifecycle actions

Set fine grained access control policies per team/tag/resource/action (EE/HS only)

Set governance constraints: e.g. quotas on cost per user/team, required expiration dates (EE/HS only)

Upload infrastructure templates that may describe complex deployments and workflows (EE/HS only)

Deploy and scale Kubernetes clusters on any supported cloud (EE/HS only)

Terminology

Cloud

Any service that provides on-demand access to computing resources

Public clouds (e.g. AWS, Azure, Google Cloud, IBM Cloud, DigitalOcean, Linode, Packet)

Private clouds (e.g. based on OpenStack, vSphere, OnApp)

Hypervisors (e.g. KVM, ESXi)

Container hosts / Container clusters

Bare metal / Other server

Machine

Any computing resource is a machine. There are many types of machines and some machines may contain other machines.

Volume

Any physical or virtual data storage device, e.g. physical HDDs/SSDs, cloud disks, EBS volumes, etc. Volumes may be attached to machines, and may be provisioned along with machines or independently.

Network

Private network spaces that machines can join, e.g. AWS VPCs

Script

An executable (e.g. bash script) or an Ansible playbook that can run on machines over SSH. Scripts may be added inline or by a reference to a tarball or a Git repository.

Template

A blueprint that describes the full lifecycle of an application that may require multiple computing resources, network, storage and additional configurations. E.g. the provided Kubernetes template enables the deployment of a Kubernetes cluster on any cloud and provides workflows to easily scale the cluster up or down. Cloudify blueprints are currently supported; Terraform support is coming soon.

Stack

The deployment of a template is a Stack. A Stack may include resources (e.g. machines, networks, volumes) and provides a set of workflow actions that can be performed. A Stack created by the Kubernetes template refers to a Kubernetes cluster. It includes references to the master and worker nodes and provides scale up & down workflows that can be applied to the cluster.

Tunnel

A point-to-point VPN enabling Mist to manage infrastructure that is not on a publicly addressable network.

Architecture

Mist is a cloud native application split into microservices which are packaged as Docker containers. It can be deployed on a single host with Docker Compose, or on a Kubernetes cluster using Helm.

The most notable components are the following:

Mist UI, a web application built with Web Components and Polymer

REST API that serves requests from clients

WebSocket API, sends real-time updates to connected clients and proxies shell connections

Hubshell service, opens SSH connections to machines or shell connections using the Docker API

Dramatiq workers, running asynchronous jobs

APScheduler based scheduler that schedules polling tasks, rule checks, as well as user defined scheduled actions

Gocky as the relay to receive and pre-process monitoring metrics

RabbitMQ message queue service

InfluxDB, or VictoriaMetrics as a time series database

MongoDB or FoundationDB Document Layer as the main database

Elasticsearch for storing and searching logs

Logstash for routing logs to Elasticsearch

Telegraf as a data collection agent, installed on monitored machines

The user interacts with the RESTful Mist API through client apps like the Mist UI in the browser, or command line tools (e.g. cURL, Mist CLI). The Mist UI, apart from invoking the RESTful API, also establishes a WebSocket connection, which is used to receive real-time updates and to proxy shell connections to machines. The Mist API server interacts with the respective APIs of the target clouds, either directly or by adding tasks that get executed asynchronously by Dramatiq workers. Messaging follows the AMQP protocol and is coordinated by RabbitMQ. The main data store is MongoDB. Logs are stored in Elasticsearch. Time series data go to either VictoriaMetrics or InfluxDB, depending on the installation. Rule checks, polling tasks & user tasks are triggered by the scheduler service. Whenever a shell connection is required (e.g. SSH or Docker shell), the Hubshell service establishes the connection and makes it available through the WebSocket API.
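To make the client flow above concrete, here is a minimal sketch that builds an authenticated request against the REST API. The `/api/v1/machines` path and the practice of passing an API token in the `Authorization` header are assumptions for illustration only — check your installation's API documentation for the exact endpoint and auth scheme.

```python
import urllib.request

# Hypothetical values -- replace with your portal URL and an API token.
MIST_URL = "http://localhost"   # the CORE_URI of your installation
API_TOKEN = "my-api-token"

# Build (but don't send) an authenticated request to list machines.
# Endpoint path and header format are illustrative assumptions.
req = urllib.request.Request(
    f"{MIST_URL}/api/v1/machines",
    headers={"Authorization": API_TOKEN},
    method="GET",
)

print(req.get_method(), req.full_url)
```

Sending the prepared request (e.g. with `urllib.request.urlopen(req)`) would return the machine list as JSON, assuming the token is valid.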

Kubernetes cluster

Add the mist chart repository and fetch available charts

helm repo add mist
helm repo update

For Mist to function correctly, you should set the http.host parameter to specify the FQDN of the installation.

helm install mist-ce mist/mist-ce --set

The above command sets the FQDN and additionally creates an administrator account with the given email address and organization name.

Configuration

In order to easily customize all available options:

Export default chart values

helm show values mist/mist-ce > values.yaml

Edit values.yaml according to your needs

Install or upgrade the release

helm upgrade --install mist-ce mist/mist-ce -f values.yaml

TLS

If you have configured a TLS certificate for this hostname as a k8s secret, you can configure it using the following option:

helm install mist-ce mist/mist-ce --set --set

If you want to issue a new certificate, also configure the cluster issuer that will be used

helm install mist-ce mist/mist-ce --set --set --set

External dockerhost

In order for the orchestration plugin to work, Mist needs to deploy Docker containers.

By default, an in-cluster dockerhost pod is deployed in privileged mode.

To use an external dockerhost set the following values:

helm install mist-ce mist/mist-ce --set docker.host=<dockerIP>,docker.port=<dockerPort>,docker.key=<TLSKey> <TLSCert> <TLSCACert>

The following table lists the configurable parameters of the Mist chart and their default values.

| Parameter | Description | Default |
| --- | --- | --- |
| http.host | FQDN or IP of Mist installation | localhost |
| http.http2 | Use HTTP/2 | false |
|  | The Kubernetes Secret containing the tls.key data | '' |
|  | Array of TLS hosts for ingress record | [] |
|  | The TLS clusterIssuer | '' |
| smtp.host | SMTP mail server address | '' |
| smtp.port | The SMTP port | 8025 |
| smtp.username | SMTP username | '' |
| smtp.password | SMTP password | '' |
|  | Use TLS with SMTP | false |
| smtp.starttls | If true, will send the starttls command (typically not used with TLS) | false |
|  | Whether to create a new admin user on first Chart installation | true |
|  | The organization name |  |
|  | The admin's email address |  |
| portalAdmin.password | The admin's password |  |
|  | If true, an API token will also be created | true |
| docker.deploy | Deploy a dockerhost pod in-cluster (the pod will run in privileged mode) | true |
| docker.host | External dockerhost address | '' |
| docker.port | External dockerhost port | 2375 |
| docker.key | The external dockerhost SSL private key | '' |
|  | The external dockerhost SSL certificate | '' |
|  | The external dockerhost CA certificate | '' |
| vault.address | Vault address | http://vault:8200 |
|  | Authentication token for Vault | '' |
|  | The Vault RoleID | '' |
| vault.secretId | The Vault SecretID | '' |
| vault.secret_engine_path |  | {} |
|  | The default Vault path for Cloud credentials | mist/clouds/ |
| vault.keys_path | The default Vault path for Key credentials | mist/keys |
| elasticsearch.host | The ElasticSearch host | '' |
| elasticsearch.port | The ElasticSearch port | 9200 |
| elasticsearch.username | Username for ElasticSearch with basic auth | '' |
| elasticsearch.password | Password for ElasticSearch with basic auth | '' |
|  | Connect to ElasticSearch using TLS | false |
| elasticsearch.verifyCerts | Whether or not to verify TLS | false |
| influxdb.host | The InfluxDB host | '' |
| influxdb.port | The InfluxDB port | 8086 |
| influxdb.db | The InfluxDB database to use | telegraf |
| victoriametrics.deploy | Deploy a Victoria Metrics cluster | true |
|  | External Victoria Metrics cluster read endpoint | '' |
|  | External Victoria Metrics cluster write endpoint | '' |
| rabbitmq.deploy | Deploy a RabbitMQ cluster | true |
|  | Number of RabbitMQ replicas to deploy | 1 |
|  | Default replication factor for queues | 1 |
| rabbitmq.auth.username | RabbitMQ username | guest |
| rabbitmq.auth.password | RabbitMQ password | guest |
|  | Erlang cookie to determine whether nodes are allowed to communicate with each other | guest |
| rabbitmqExternal.host | External RabbitMQ address (only used when rabbitmq.deploy is false) | '' |
| rabbitmqExternal.port | External RabbitMQ port | 5672 |
| rabbitmqExternal.username | External RabbitMQ username | guest |
| rabbitmqExternal.password | External RabbitMQ password | guest |
| mongodb.deploy | Deploy a MongoDB cluster | true |
| mongodb.host | External MongoDB address (only used when mongodb.deploy is false) | '' |
| mongodb.port | External MongoDB port | 27017 |
| memcached.host | Memcached host in the format {host}:{port} | '' |
| monitoring.defaultMethod | Available options: "telegraf-victoriametrics", "telegraf-influxdb" | telegraf-influxdb |
|  | Allow signups with email/password | false |
|  | Allow signins with email/password | true |
| auth.google.signup | Allow signups with Google oAuth | false |
| auth.google.signin | Allow signins with Google oAuth | false |
| auth.google.key | The Client ID for Google oAuth | '' |
| auth.google.secret | The Client Secret for Google oAuth | '' |
| auth.github.signup | Allow signups with Github oAuth | false |
| auth.github.signin | Allow signins with Github oAuth | false |
| auth.github.key | The Client ID for Github oAuth | '' |
| auth.github.secret | The Client Secret for Github oAuth | '' |
| backup.key | The AWS Key | '' |
| backup.secret | The AWS Secret | '' |
| backup.bucket | The S3 Bucket name used to store backups | '' |
|  | The region where the S3 bucket is located | '' |
|  | The email recipient of the encrypted backup | '' |
| backup.gpg.public | The GPG public key | '' |
| githubBotToken |  | '' |
|  | Replicas for gocky deployment | 1 |
|  | Replicas for api deployment | 2 |
|  | Replicas for sockjs deployment | 1 |
|  | Replicas for ui deployment | 1 |
|  | Replicas for nginx deployment | 1 |
|  | Replicas for landing deployment | 1 |
|  | Enable dramatiq consumers for all queues | true / 2 |
|  | Enable dramatiq consumers for "default" queue | false / 1 |
|  | Enable dramatiq consumers for "dramatiq_provisioning" queue | false / 1 |
|  | Enable dramatiq consumers for "dramatiq_polling" queue | false / 1 |
|  | Enable dramatiq consumers for "dramatiq_machines" queue | false / 1 |
|  | Enable dramatiq consumers for "dramatiq_clusters" queue | false / 1 |
|  | Enable dramatiq consumers for "dramatiq_networks" queue | false / 1 |
|  | Enable dramatiq consumers for "dramatiq_zones" queue | false / 1 |
|  | Enable dramatiq consumers for "dramatiq_volumes" queue | false / 1 |
|  | Enable dramatiq consumers for "dramatiq_buckets" queue | false / 1 |
|  | Enable dramatiq consumers for "dramatiq_mappings", "dramatiq_sessions" queues | false / 1 |
|  | Enable dramatiq consumers for "dramatiq_scripts" queue | false / 1 |
|  | Enable dramatiq consumers for "dramatiq_ssh_probe" queue | false / 1 |
|  | Enable dramatiq consumers for "dramatiq_ping_probe" queue | false / 1 |
|  | Enable dramatiq consumers for "dramatiq_rules" queue | false / 1 |
|  | Enable dramatiq consumers for "dramatiq_schedules" queue | false / 1 |
|  | Enable scheduler for all polling schedules | true / 1 |
|  | Enable scheduler for "builtin" schedules | false / 1 |
|  | Enable scheduler for "user" schedules | false / 1 |
|  | Enable scheduler for "polling" schedules | false / 1 |
|  | Enable scheduler for "rules" schedules | false / 1 |

Single host

The easiest way to get started with Mist is to install the latest release using docker-compose. To run it, you need a recent version of docker and docker-compose installed.

To install the latest stable release, head over to releases and follow the instructions there.

After a few minutes (depending on your connection) all the mist containers will be downloaded and started in the background.

Run docker-compose ps . All containers should be in the UP state, except the short-lived elasticsearch-manage container.

Linode users can quickly set up Mist through Linode's One-Click App Marketplace. You can find Mist here and a video about how it works here.

Hardware requirements

We recommend setting up Mist on a machine with 4 CPU cores, 8GB RAM and 10GB disk (available to /var/lib/docker/).

Running Mist

Switch to the directory containing the docker-compose.yml file and run

docker-compose up -d

This will start all the mist docker containers in the background.

To create a user for the first time, first run

docker-compose exec api sh

This should drop you in a shell into one of the mist containers. In there, run

./bin/adduser --admin admin@example.com

Replace the email address with yours. Try running ./bin/adduser -h for more options. The --docker-cloud flag will add the docker daemon hosting the mist installation as a docker cloud in the created account.

Mist binds on port 80 of the host. Visit http://localhost and login with the email and password specified above.

Welcome to Mist! Enjoy!

Configuring

After the initial docker-compose up -d , you'll see that a configuration file has been created at ./settings/settings.py . Edit this file to modify the configuration. Any changes to ./settings/settings.py require a restart to take effect:

docker-compose restart

Required configuration

URL

If running on anything other than localhost , you'll need to set the CORE_URI setting in ./settings/settings.py . Example:

CORE_URI = "http://198.51.100.12"

Mail settings

In some cases, such as user registration, forgotten passwords, user invitations etc., Mist needs to send emails. By default, Mist is configured to use a mock mailer. To see the emails it sends, check the logs with

docker-compose logs -f mailmock

If you wish to use a real SMTP server, edit ./settings/settings.py and modify MAILER_SETTINGS .
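For example, pointing Mist at an external SMTP server might look like the following sketch. The exact key names below are assumptions for illustration — mirror the MAILER_SETTINGS structure that your generated settings.py already contains.

```python
# ./settings/settings.py -- hypothetical SMTP configuration sketch.
# Key names are illustrative assumptions; keep the structure your
# generated settings.py shows.
MAILER_SETTINGS = {
    'mail.host': 'smtp.example.com',
    'mail.port': '587',
    'mail.username': 'mist@example.com',
    'mail.password': 'secret',
    'mail.tls': False,
    'mail.starttls': True,
}
```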

Don't forget to restart docker-compose for changes to take effect.

TLS settings

This section applies if you've installed mist by using the docker-compose.yml file of a mist release.

Assuming a certificate cert.pem and private key file key.pem in the same directory as the docker-compose.yml file:

Create a file with the following contents:

version: '2.0'
services:
  nginx:
    volumes:
      - ./cert.pem:/etc/nginx/cert.pem:ro
      - ./key.pem:/etc/nginx/key.pem:ro
    ports:
      - 443:443

Create a file in the directory of docker-compose.yml, with the following contents:

listen 80;
listen 443 ssl;
server_name
ssl_certificate /etc/nginx/cert.pem;
ssl_certificate_key /etc/nginx/key.pem;
if ($scheme != "https") {
    rewrite ^ https://$host$uri permanent;
}

Update CORE_URI in mist's settings (see URL section above).

Run docker-compose up -d .

Managing Mist

Mist is managed using docker-compose . Look that up for details. Some useful commands follow. Keep in mind that you need to run these from inside the directory containing the docker-compose.yml file:

# See status of all applications
docker-compose ps
# Almost all containers should be in the UP state. An exception to this
# is short-lived containers. Currently the only such container is
# elasticsearch-manage. This should run for a few seconds and exit 0 if
# everything went fine.

# Restart nginx container
docker-compose restart nginx

# See the logs of the api and celery containers, starting with the last
# 50 lines.
docker-compose logs --tail=50 -f api celery

# Stop mist
docker-compose stop

# Start mist
docker-compose start
# or even better
docker-compose up -d

# Stop and remove all containers
docker-compose down

# Completely remove all containers and data volumes.
docker-compose down -v

Migrating from previous versions

Bring down your current installation by running docker-compose down . Download the docker-compose.yml file of the latest release and place it within the same directory as before. This way the new installation will use the same Docker volumes. Run docker-compose up -d to bring up the new version. Check that everything is in order by running docker-compose ps . Also check if your Mist portal works as expected.

Backups

Mist can automatically backup itself to an S3 bucket. To set this up, first create a bucket for the backups on your S3 provider (AWS, MinIO, etc).

Then go to settings/settings.py of your Mist installation and edit the following part accordingly:

BACKUP_INTERVAL = 24  # hours between each backup
BACKUP = {
    'host': '',  # e.g.
    'key': '',
    'secret': '',
    'bucket': '',
    'gpg': {
        'recipient': '',
        'public': '',
        'private': '',
    }
}

Providing a GPG key is optional but strongly recommended. If you provide it, your backups will be encrypted before getting uploaded to your bucket. Mist also offers a set of manual commands for backing up, listing backups and restoring backups:

docker-compose exec api ./bin/backup docker-compose exec api ./bin/list-backups docker-compose exec api ./bin/restore {{myBackupName}}

Backups on time series data stored on VictoriaMetrics will be incremental by default. To perform a full backup, use the --no-incremental flag:

docker-compose exec api ./bin/backup --db victoria --no-incremental

Finally, please keep in mind that backups include MongoDB, InfluxDB & VictoriaMetrics data. Mist logs are stored in Elasticsearch. If you would like to backup these as well, please check out

Monitoring methods

Mist stores monitoring metrics in InfluxDB by default. Since v4.6 it's possible to use VictoriaMetrics instead. You can configure that in settings/settings.py

DEFAULT_MONITORING_METHOD = 'telegraf-victoriametrics'

Restart docker-compose for changes to take effect.

docker-compose restart

Then run the respective migration script.

docker-compose exec api python migrations/0016-migrate-monitoring.py

The above script will update all monitored machines to use the configured monitoring method. It will also update all rules on metrics to use the appropriate query format. It won't migrate past monitoring data between time series databases.

If running on Kubernetes, configure monitoring.defaultMethod in values.yaml instead and use helm to upgrade your release as described above.

Staging version

If you want to install the latest bleeding edge build of mist, run the following:

mkdir mist-ce && cd mist-ce
echo 'MIST_TAG=staging' >
wget
docker-compose up -d

Development deployment

If you're planning to modify Mist's source code, an alternative installation method is recommended.

Clone this git repo and all its submodules with something like:

git clone --recursive
cd mist-ce
docker-compose up -d

This may take some time.

This setup will mount the checked-out code into the containers. After cloning the repository, there is also a file in the current directory, in addition to docker-compose.yml, which is used to modify the configuration for development mode.

If you're not interested in frontend development, you can comment out the ui & landing sections within the file and re-run docker-compose up -d . Otherwise, you'll also need to install the ui & landing page dependencies before you can access the Mist UI.

Install all front-end dependencies with the following commands

docker-compose exec landing npm install docker-compose exec ui npm install

And then build the landing & ui bundles

docker-compose exec landing npm run build docker-compose exec ui npm run build

When doing front-end development, it's usually more convenient to serve the source code instead of the bundles. To do that, edit settings/settings.py and set JS_BUILD = False . Restart the api container for the changes to take effect

./restart.sh api

The above instructions for running and managing Mist apply.

Cloud and HPC Management Platform

How it works

The Waldur MasterMind cloud orchestrator exposes a REST API service that enables end customers to access Marketplace, organization, project and resource management functionality. MasterMind comes with an advanced Admin Portal interface for operator-only use.

MasterMind is highly modular. Modules are divided into Core, AuthN, Value-add and Provider groups. For example, Core modules deal with standardized global objects, while Provider modules map and handle backend specifics. Core functionality can be extended and customized further with Value-add modules, and identity providers are fully pluggable.

Provider modules allow integration with in-house infrastructure orchestrators, existing systems and public cloud endpoints.

Waldur HomePort is a state-of-the-art Self-Service Portal targeted at end customers. It is built as a ReactJS application and communicates with the MasterMind API service directly from the browser. It provides a graphical user interface for the Marketplace catalogue, dashboards, resource management and much more.

HomePort includes Helpdesk functionality for convenient interaction with operator’s support team.

How to Choose a Cloud Management Platform

As more and more companies build internal private clouds or enter the service provider market with public clouds, they will increasingly need the right set of tools to successfully build, manage and scale their Infrastructure as a Service (IaaS) platform. However, choosing the right technology stack can be a difficult decision. There are several aspects that should be considered, such as planning for future growth and demand, team size, budget, project timeframe, previous experience, available hardware and the underlying infrastructure already in place. In this article, we will focus on the platforms that enable you to provision IaaS — the software which turns your infrastructure into a fully-featured cloud environment — and also look at key factors which can affect your decision-making process and ensure your cloud project is a success.

When it comes to cloud management (or cloud orchestration) platforms, the first thing we need to clarify is what we mean by this. A typical cloud management platform allows one to take existing datacentre infrastructure and wrap around it a common API, CLI and user interface, so that an organization can benefit from the basic concepts of cloud computing in terms of elasticity, metered usage, self-service and resource pooling.

For some, the first choice to make is whether to go for an open-source or a proprietary/vendor solution. This may not come as a surprise to anyone… but at ShapeBlue, we passionately believe the solution should be 100% open-source, and following years of experience, feedback from users and developers, testing most other solutions and working with the community, we also passionately believe the solution should be Apache CloudStack! As we are regularly asked ‘why?’, we will try to answer that question here.

Proprietary Cloud Management Platforms

There are two categories of vendors in the proprietary market, which we review below.

Small, proprietary vendors focused on the service provider market, delivering end-to-end IaaS solutions (e.g. OnApp, Flexiant)

Although these solutions can be considered to work "out of the box", you are usually limited to the specific technology stack that they support. A cloud management platform could be orchestrating hundreds of different types of hardware, and different hypervisors and storage types. Broad support for multiple technology stacks is essential, and whilst 'locking in' to a vendor or solution might seem like a good idea for the short term, you will probably want to have other options in the longer term.

Small, proprietary vendors might not have the scale to compete with large open-source projects or large vendors in this respect. You can become dependent on the vendor's roadmap, and you are limiting yourself in terms of the support you will get below and above the stack.

Pros: Quick to deploy, an end-to-end solution aimed at service providers. Support often included

Cons: Locked-in to the solution, limited choices for supported infrastructure, hypervisor, etc.; possible licensing costs.

Large, proprietary vendor solutions (e.g. VMware vCloud Director, Red Hat OpenStack)

Large, commercial vendors will provide well known, well supported, enterprise-grade solutions. However, as with the smaller vendors, you could be restricting yourself to using a particular hypervisor or OS and might also need to pay license and support fees.

As already mentioned, when choosing a cloud management platform, it can be wise long-term to avoid vendor or technology lock-in. At some point, you may need to (for example) change the hypervisor, or integrate with a new storage solution. These changes should be possible and straightforward without wholesale changes to your environment.

Pros: Enterprise-grade support, reputation, deployment usually included

Cons: Complex solutions to support and maintain, high capital and operational costs, vendor lock-in

Open-source Cloud Management Platforms

Open-source cloud management platforms are community-driven, not dominated by a single vendor and follow a rapid development process. This community approach (with so many companies and individuals collaborating and contributing) means you will not be pushed in a specific direction from a large company or vendor. When choosing an open-source project you should look for a project with a roadmap, forward momentum and lots of contributors to ensure it supports a broad technology stack now, with plans to develop support as the industry changes.

Some of the more well-known open-source cloud management platforms are:

OpenStack

Freely available cloud software orchestrator, with a wide range of features (virtual networks, bare-metal support, integrated VM and container support, VLAN-aware VM support, etc.). However, OpenStack is widely considered to be overly complex, needing a lot of time to deploy and a lot of effort to manage. Most users currently run vendor distributions from Red Hat, IBM, etc. in production, which makes it effectively a proprietary solution.

Pros: Broad range of modules available and large open-source community; broad hypervisor support

Cons: Time-consuming and difficult to deploy and maintain; installation and configuration of multiple modules required for basic cloud functionality; support is expensive

OpenNebula

Integrates with multiple virtualization technologies, and the Community Edition (CE) is free, open-source and full-featured. However, only the top level of enterprise subscription includes 'product influence', and only the enterprise editions are truly suitable for enterprise production use. It is open-source, but as OpenNebula is both the product and the vendor, it is debatable whether you are avoiding vendor lock-in!

Pros: Easy to use and manage. Easy to implement.

Cons: Subscription costs for enterprise editions, potentially limited influence or opportunity for collaboration

Apache CloudStack

Fully featured platform supporting a wide range of integrations. Constantly developing new features and support for new technologies, with a clearly defined, evolving roadmap guided by users and the community. There are no different levels of support or versions, and although vendor distributions are available, no vendor has a dominant influence over the project; most organizations run the freely available, open-source version in production.

Pros: Easy to implement and manage; large community of active contributors; no vendor lock-in

Cons: Not as widely known as other solutions (e.g. OpenStack); documentation could be improved

Having decided between open-source and proprietary, you can now start to narrow down the field even further by focusing on exactly what you need, and start to arrange demos, or even deploy your own test environment.

Deciding on a Cloud Management Platform

Know what you want

Consider your requirements. These might just be "obvious" requirements, such as a common API for internal business units or customers; requirements around data retention; performance; security; or specific functional requirements. The more in-depth your understanding of your organization's requirements, the easier it will be to ascertain which platforms are most suitable.

Consider your existing infrastructure

A common issue is that many companies do not map their existing infrastructure to their requirements, and start to change parts of their technology stack to accommodate shortfalls in functionality. We recommend you look at your existing infrastructure and consider what solutions will complement it.

Evaluate platforms against the requirements

Undertake a proof of concept (PoC) of different solutions. A PoC should prove (or disprove) whether the technology matches your requirements. If it does not, then question how complex it might be to develop the technology to fit your needs.

Think about the migration process

Be sure the migration of your existing workloads will be relatively straightforward, and consider any unavoidable downtime. This requires planning and testing.

Understand the Total Cost of Ownership (TCO) of each platform

If you are a service provider or public cloud provider, costs are critical to your success. To be competitive, you need to know the TCO of your platform, so that you can correctly price the services you offer. TCO should include costs for licensing, implementation, support, management, administration, etc.
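As a toy illustration of the arithmetic involved, the sketch below sums annual cost components into a TCO figure and derives a per-VM monthly floor price. Every number here is invented purely for illustration.

```python
# Toy TCO sketch: all figures are made-up illustrations.
annual_costs = {
    "licensing": 0,              # open-source platform
    "implementation": 30_000,
    "support": 12_000,
    "management_admin": 48_000,
    "hardware_amortization": 60_000,
}
annual_tco = sum(annual_costs.values())

vms = 500  # average number of VMs hosted over the year
monthly_floor_price_per_vm = annual_tco / vms / 12

print(annual_tco)                             # 150000
print(round(monthly_floor_price_per_vm, 2))   # 25.0
```

Any service priced below that per-VM floor would run at a loss, which is why an accurate TCO matters before setting prices.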

Build a future-proof system

Ensure that the solution you choose will be adaptable to your future requirements. If you anticipate using VMware for 10 years then that’s the solution for you. However – if you anticipate eventually offering your customers different options, then you need a platform that offers those options.

Simple Steps to Make Your Project Successful

Use best practices and design patterns

You do not need to design the most unique infrastructure on the market. But you need to make something which works as expected, matches your requirements and brings a good return on investment.

You are not the first company to do this, so why not benefit from others' experiences?

Understand horizontal scale

The resource pooling nature of IaaS allows it to scale very easily. But scalability should be designed in from the very beginning. How are you going to scale all the components of the infrastructure?

Start small, grow big

Having an orchestrated environment gives you a clearer view of capacity planning. The metering/billing capabilities of an orchestrated environment allow you to understand where the costs are delivering value to your organization.

Do not follow the hype

Choosing a CMP is a long-term investment. If you need to change it in a few years, it will mean a huge amount of work and time for your team and organization. Choose something that is built for the long term and will still be there in 10 years.

Conclusion

As mentioned at the beginning of this article, we love open-source, and specifically Apache CloudStack. However – that doesn’t mean it’s right for everyone, and all the other products/projects mentioned herein are great solutions and should be considered. Think carefully about what you need, set up demos or trials and look at where the technology is going.

If you have any questions about this blog, or would like to know more about CloudStack, or set up a demo with us, please contact
