Compute Power for Analytic and ML Workloads

Google trains its machine learning models on its large data center network and then deploys smaller, trained versions of those models to devices such as your phone, where they power features like video predictions. You can also take advantage of Google's AI work through pre-trained AI building blocks. For example, if you produce film trailers and want to quickly detect labels and objects across thousands of trailers to build a film recommendation system, you might call the Cloud Video Intelligence API rather than creating and training a custom model of your own.
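As a rough sketch, a label-detection request to the Cloud Video Intelligence API looks like the following. This assumes the `google-cloud-videointelligence` client library is installed and cloud credentials are configured; the bucket path is a hypothetical placeholder.

```python
from google.cloud import videointelligence

# Create a client and request label detection on a video stored in
# Cloud Storage (the gs:// path below is a made-up example).
client = videointelligence.VideoIntelligenceServiceClient()
operation = client.annotate_video(
    request={
        "features": [videointelligence.Feature.LABEL_DETECTION],
        "input_uri": "gs://my-bucket/trailer.mp4",
    }
)

# Annotation runs asynchronously; wait for the long-running operation.
result = operation.result(timeout=300)

# Print the labels detected across the whole video.
for annotation in result.annotation_results[0].segment_label_annotations:
    print(annotation.entity.description)
```

Because the service returns pre-trained labels, the same few lines work across thousands of trailers with no model training on your side.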

Other pre-trained models, such as those for speech and language, are available as well. For Google, running so many sophisticated ML models over large structured and unstructured datasets required a huge investment in computing power.

Google has been doing cloud computing for its own products for over ten years and has now made that computing power accessible to you through Google Cloud.

Historically, compute problems like these could be addressed by Moore's Law. Moore's Law is the observation that computational power doubles at a predictable rate, roughly every two years. For years, computing capacity grew so quickly that you could simply wait for hardware to catch up to the scale of your problem. Although computing power was still increasing rapidly even as recently as eight years ago, growth has slowed significantly over the past few years as manufacturers run up against fundamental physical limits. Computing performance has reached a plateau.
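To see why "wait for hardware to catch up" used to be a viable strategy, consider the arithmetic of that doubling trend. A minimal sketch, assuming the commonly cited two-year doubling period:

```python
# Moore's Law as compound growth: capacity doubles every ~2 years,
# so a decade of waiting compounded to roughly a 32x increase.
DOUBLING_PERIOD_YEARS = 2

def growth_factor(years: float) -> float:
    """Projected multiple of compute capacity after `years` of doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

print(growth_factor(10))  # 32.0 -- ten years = five doublings
```

When that curve flattens, the 32x free lunch disappears, which is why specialized hardware and large-scale data centers have become the way forward instead.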
