In the current landscape, business continuity is about much more than natural disasters: denial-of-service attacks, ransomware and even network outages can undermine business continuity, and while moving applications to the cloud might seem like a viable solution, cloud providers aren’t immune to outages.
These are just a few of the problems being tackled by the companies featured in this roundup of business-continuity startups. (See how we chose the top 10 companies listed in this roundup.)
You’ll notice that blockchain, machine learning and networking play a big role in their work, and rightly so. Blockchain helps decentralize data; machine learning helps automate backups and recovery; and reliable networking is a must because you can’t have continuity if you can’t access your data.
The startups below offer everything from blockchain-based distributed storage to virtualized storage and data management to swarm-computing-based databases. Any one of them could be the upstart that completely reconfigures the storage and business continuity landscape, which is why we’ll be watching them.
What they do: Provide a hybrid-cloud, open-source storage platform
Year founded: 2015
Funding: $7.5M in Series A funding from Andreessen Horowitz
Headquarters: Berkeley, Calif.
CEO: Haoyuan Li, who earned his Ph.D. in computer science from UC Berkeley and previously served as a software engineer at Conviva.
Problem they solve: As machine learning, data analytics, and AI make app-to-app communications business critical, business continuity takes on new meaning. Business continuity is now about more than just protecting data and making it available to knowledge workers. Now, business continuity means that data must be usable (unified) and instantly accessible to any application anywhere in the world, in any cloud, on any infrastructure.
Moreover, many analytics workloads, such as cybersecurity monitoring, must be executed in near real time to have value, meaning any bottleneck exposes your data, applications, and business to risk.
How they solve it: Alluxio’s memory-centric virtual distributed storage system provides global access to all of the data in your enterprise – on-premises, in the cloud or hybrid. Applications have a single point of access to multiple independent storage systems regardless of physical location.
Server-side API translation converts from a client-side interface to any storage interface. Alluxio manages communication between applications and file or object storage, eliminating the need for complex system configuration and management. File data can look like object data and vice versa. If you have multiple versions of HDFS in your enterprise, Alluxio also gives an application the ability to talk to different versions of the same storage.
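The idea behind that translation layer can be illustrated with a toy sketch. This is not Alluxio’s actual API; the `ObjectStore` and `FileView` classes below are hypothetical stand-ins showing how a file-style interface can be mapped onto an object-store backend so an application never sees the difference:

```python
class ObjectStore:
    """Stands in for any key/value object storage backend (hypothetical)."""

    def __init__(self):
        self._objects = {}

    def put_object(self, key, data):
        self._objects[key] = data

    def get_object(self, key):
        return self._objects[key]


class FileView:
    """Translates file-style read calls into object-store gets,
    so file data and object data look interchangeable to the app."""

    def __init__(self, store):
        self._store = store

    def read(self, path):
        # In this sketch a file path maps directly onto an object key.
        return self._store.get_object(path.lstrip("/"))


store = ObjectStore()
store.put_object("reports/q3.csv", b"revenue,1000\n")
fs = FileView(store)
data = fs.read("/reports/q3.csv")  # file-style access, object-store backing
```

A real translation layer also has to handle directories, metadata and partial reads, but the core move is the same: one client-side interface, many storage interfaces behind it.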
Alluxio clusters act as a read/write cache for data in connected storage systems. Temporarily storing data in memory or other media near compute accelerates access and provides local performance from remote storage. This capability is even more critical with the movement of compute applications to the cloud and data being located in object stores separate from compute.
Intelligent caching and data management ensure fast performance, data protection, business continuity and high availability. Caching is transparent to the end user and utilizes read/write buffering to maintain continuity with persistent storage. Intelligent cache management utilizes configurable policies for efficient data placement and supports tiered storage for both memory and disk (SSD/HDD).
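A minimal sketch of that read/write buffering idea, assuming a simple write-through policy and LRU eviction (Alluxio’s actual policies are configurable and more sophisticated; `CachingTier` here is an illustrative name, not a product class):

```python
from collections import OrderedDict


class CachingTier:
    """LRU cache in front of a slower persistent store. Writes pass
    through to the backing store (write-through), so the cache stays
    consistent with persistent storage; reads are served locally on a hit."""

    def __init__(self, backing, capacity=2):
        self._backing = backing      # dict standing in for remote storage
        self._cache = OrderedDict()  # ordered for LRU eviction
        self._capacity = capacity

    def write(self, key, value):
        self._backing[key] = value   # persist first (write-through)
        self._put_cached(key, value)

    def read(self, key):
        if key in self._cache:       # hit: local-speed access
            self._cache.move_to_end(key)
            return self._cache[key]
        value = self._backing[key]   # miss: fetch from remote, then cache
        self._put_cached(key, value)
        return value

    def _put_cached(self, key, value):
        self._cache[key] = value
        self._cache.move_to_end(key)
        if len(self._cache) > self._capacity:
            self._cache.popitem(last=False)  # evict least recently used
```

Tiered storage extends the same pattern: instead of one in-memory dict, evicted entries would demote to SSD or HDD tiers before falling back to the remote store.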
Alluxio fits within existing frameworks and enforces the security already in place. User authentication, authorization, access-control and data-encryption policies from both applications and storage are applied within Alluxio. Support is provided for multi-tenancy, Active Directory, LDAP, Kerberos and encryption.
Competitors include: Hedvig, Mesosphere, Databricks and Puppet
Customers include: Alibaba, Baidu, Barclays, CERN, ESRI, Google, Huawei, Intel and Juniper
Why they’re a hot startup to watch: Alluxio is the classic case of a computer science student tinkering with the beginnings of a startup while still in school. After graduating, Li turned his lab experiment into the startup Tachyon, which has since been renamed Alluxio.
Alluxio has one round of funding from one investor, but it’s $7.5M from Andreessen Horowitz, which is a good backer to have.
Alluxio has a long list of impressive customers, including Alibaba, CERN, ESRI, Google and Intel, among others.
The company’s concept of “unified data at memory speed” is compelling. As Big Data, AI, IoT, and other compute/memory/storage-intensive applications continue to penetrate the enterprise, new methods for breaking through bottlenecks will be critical. Moreover, Alluxio’s data unification and intelligent caching make business continuity almost a byproduct of their architecture.
What they do: Provide decentralized data storage that relies on blockchain to protect the integrity of data
Year founded: 2014
Funding: $22.3M, which includes a $1M investment in March 2018 led by NEO Global Capital, and a $19.5M ICO raised in January 2018.
CEO: Pavel Bains, who previously served as CEO of Storypanda and as Studio Director for Threewave Software
Problem they solve: Centralized database storage creates a number of problems, from heightened risk of data breaches (such as the Equifax, Yahoo!, and British Airways breaches) to a lack of data integrity (how do you know the data hasn’t been tampered with?) to a lack of system reliability (with a centralized system, a single point of failure can knock everything offline).
How they solve it: Bluzelle’s decentralized database service uses blockchain technology to provide software applications with improved security, reliability and integrity of their data. All of the data in the database is replicated based on the concept of swarm computing; the nodes comprising the swarm are geographically dispersed, thus providing business continuity in the event of natural or human-caused disaster.
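The dicing-and-dispersing approach can be sketched in a few lines. This is an illustrative model, not Bluzelle’s implementation: `place_shards` is a hypothetical helper that splits data into shards and assigns each shard to multiple dispersed nodes, so no single node failure loses any shard:

```python
import hashlib


def place_shards(data, nodes, shard_size=4, replicas=2):
    """Dice `data` into shards and assign each shard to `replicas`
    distinct nodes from a geographically dispersed swarm."""
    shards = [data[i:i + shard_size] for i in range(0, len(data), shard_size)]
    placement = {}
    for idx, shard in enumerate(shards):
        # Hash-based placement spreads shards unpredictably across the swarm.
        start = int(hashlib.sha256(shard + bytes([idx])).hexdigest(), 16) % len(nodes)
        placement[idx] = [nodes[(start + r) % len(nodes)] for r in range(replicas)]
    return shards, placement


nodes = ["us-east", "eu-west", "ap-south"]
shards, placement = place_shards(b"customer-record", nodes)
```

With every shard held by at least two regions, losing any one region still leaves a full copy of the data reconstructable, which is the business-continuity property the swarm model is after.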
According to Bluzelle, this decentralized approach is able to scale in step with business needs, rather than forcing you to predict (and pay for) peak capacity. Moreover, since data is diced up and distributed across many non-deterministic locations, security and business continuity are improved. Even in the unlikely eve