Building a GPU-fueled infrastructure service is no simple matter for a startup to undertake, but that’s precisely what Paperspace has set out to do. Today, the company took that effort to the next level, announcing Gradient, a platform that eliminates the need to deploy servers for AI and machine learning projects.
As with any serverless architecture, the servers don’t really go away, but the need to deploy them manually does. Gradient lets developers simply deploy code, and Paperspace takes care of all the allocation and management, removing a big piece of the complexity associated with building machine learning models.
Dillon Erb, company co-founder and CEO, says that when they launched the company several years ago, GPUs were not as widely available as a cloud service as they are today. Paperspace initially provided a way to launch GPU instances in virtual machines, something it still does, but the team saw a gap in the tooling around that raw compute.
Erb explained that large companies tend to build their own tool sets, but most companies, or teams for that matter, don’t have the resources to build the underlying plumbing themselves. “Just having raw compute is not sufficient. You need a software stack,” he said.
The company spent the last year building Gradient to provide that structure, letting developers concentrate on building models, writing code and collaborating on a project while leaving infrastructure management to Paperspace. It removes the need for a DevOps team to manage the interactions among the team, the code and the underlying infrastructure.
“Just give us code, a Docker container. You don’t have to schedule a VM because we do it for you. You never have to fire up a machine,” he said.
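To make that workflow concrete, here is a minimal sketch of the kind of self-contained training script a team might package into a Docker image and hand to a service like Gradient. The image name and the submission command shown in the comments are illustrative assumptions for this article, not Paperspace’s documented CLI syntax.

```python
# train.py -- a toy training script of the sort you would hand to a managed
# GPU service. The point is the workflow, not the model: package code like
# this in a Docker container, submit it, and the platform schedules the
# compute for you -- no VM to provision or tear down.
#
# A hypothetical submission might look like (command name and flags are
# assumptions, not Paperspace's documented interface):
#   gradient jobs create --container my-image:latest --command "python train.py"

import random


def train(steps: int = 1000, lr: float = 0.01) -> float:
    """Fit y = 2x with plain gradient descent on synthetic data."""
    w = 0.0
    data = [(x, 2.0 * x) for x in (random.uniform(-1, 1) for _ in range(100))]
    for _ in range(steps):
        # Mean-squared-error gradient with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w


if __name__ == "__main__":
    print(f"learned weight: {train():.3f}")  # should approach 2.0
```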
Paperspace has been trying to solve hard problems around deploying GPUs in the cloud since it graduated from the Y Combinator Winter 2015 class. It has raised over $11 million in funding since it launched in 2014, including a $4 million seed round in 2016.