Category Archives: Cloud Computing

Microsoft announces new supercomputer, lays out vision for future AI work
19 May

Microsoft has built one of the top five publicly disclosed supercomputers in the world, making new infrastructure available in Azure to train extremely large artificial intelligence models, the company is announcing at its Build developers conference.

Built in collaboration with and exclusively for OpenAI, the supercomputer hosted in Azure was designed specifically to train that company’s AI models. It represents a key milestone in a partnership announced last year to jointly create new supercomputing technologies in Azure.

It’s also a first step toward making the next generation of very large AI models and the infrastructure needed to train them available as a platform for other organizations and developers to build upon.

“The exciting thing about these models is the breadth of things they’re going to enable,” said Microsoft Chief Technical Officer Kevin Scott, who said the potential benefits extend far beyond narrow advances in one type of AI model.

“This is about being able to do a hundred exciting things in natural language processing at once and a hundred exciting things in computer vision, and when you start to see combinations of these perceptual domains, you’re going to have new applications that are hard to even imagine right now,” he said.

A new class of multitasking AI models

Machine learning experts have historically built separate, smaller AI models that use many labeled examples to learn a single task such as translating between languages, recognizing objects, reading text to identify key points in an email, or recognizing speech well enough to deliver today’s weather report when asked.

A new class of models developed by the AI research community has proven that some of those tasks can be performed better by a single massive model — one that learns from examining billions of pages of publicly available text, for example. This type of model can so deeply absorb the nuances of language, grammar, knowledge, concepts, and context that it can excel at multiple tasks: summarizing a lengthy speech, moderating content in live gaming chats, finding relevant passages across thousands of legal files or even generating code from scouring GitHub.

As part of a companywide AI at Scale initiative, Microsoft has developed its own family of large AI models, the Microsoft Turing models, which it has used to improve many different language understanding tasks across Bing, Office, Dynamics, and other productivity products.  Earlier this year, it also released to researchers the largest publicly available AI language model in the world, the Microsoft Turing model for natural language generation.

The goal, Microsoft says, is to make its large AI models, training optimization tools, and supercomputing resources available through Azure AI services and GitHub so developers, data scientists, and business customers can easily leverage the power of AI at Scale.

“By now most people intuitively understand how personal computers are a platform — you buy one and it’s not like everything the computer is ever going to do is built into the device when you pull it out of the box,” Scott said.

“That’s exactly what we mean when we say AI is becoming a platform,” he said. “This is about taking a very broad set of data and training a model that learns to do a general set of things and making that model available for millions of developers to go figure out how to do interesting and creative things with.”

Training massive AI models requires advanced supercomputing infrastructure, or clusters of state-of-the-art hardware connected by high-bandwidth networks. It also requires tools to train the models across these interconnected computers.

The supercomputer developed for OpenAI is a single system with more than 285,000 CPU cores, 10,000 GPUs, and 400 gigabits per second of network connectivity for each GPU server. Compared with other machines listed on the TOP500 supercomputers in the world, it ranks in the top five, Microsoft says. Hosted in Azure, the supercomputer also benefits from all the capabilities of robust modern cloud infrastructure, including rapid deployment, sustainable data centers, and access to Azure services.

“As we’ve learned more and more about what we need and the different limits of all the components that make up a supercomputer, we were really able to say, ‘If we could design our dream system, what would it look like?’” said OpenAI CEO Sam Altman. “And then Microsoft was able to build it.”

OpenAI’s goal is not just to pursue research breakthroughs but also to engineer and develop powerful AI technologies that other people can use, Altman said. The supercomputer developed in partnership with Microsoft was designed to accelerate that cycle.

“We are seeing that larger-scale systems are an important component in training more powerful models,” Altman said.

For customers who want to push the boundaries of their AI ambitions but don’t require a dedicated supercomputer, Azure AI provides access to powerful computing with the same set of AI accelerators and networks that also power the supercomputer. Microsoft is also making available the tools to train large AI models on these clusters in a distributed and optimized way.

At its Build conference, Microsoft announced that it would soon begin open-sourcing its Microsoft Turing models, as well as recipes for training them in Azure Machine Learning. This will give developers access to the same family of powerful language models that the company has used to improve language understanding across its products.

It also unveiled a new version of DeepSpeed, an open-source deep-learning library for PyTorch that reduces the amount of computing power needed for large distributed model training. The update is significantly more efficient than the version released just three months ago and now allows people to train models more than 15 times larger and 10 times faster than they could without DeepSpeed on the same infrastructure.
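
The announcement itself contains no code, but DeepSpeed’s public API follows a simple pattern: wrap an existing PyTorch model with deepspeed.initialize and drive training through the returned engine. The sketch below is illustrative only; the tiny model, synthetic data, and configuration values are assumptions, not Microsoft’s actual training setup.

    # Minimal sketch of training a PyTorch model through DeepSpeed (illustrative only).
    import torch
    import deepspeed

    # Tiny stand-in model and synthetic data so the sketch is self-contained.
    model = torch.nn.Linear(128, 10)
    data = [(torch.randn(32, 128), torch.randint(0, 10, (32,))) for _ in range(8)]

    # DeepSpeed config is normally a JSON file; a dict works too (values are placeholders).
    ds_config = {
        "train_batch_size": 32,
        "fp16": {"enabled": False},
        "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    }

    engine, optimizer, _, _ = deepspeed.initialize(
        model=model, model_parameters=model.parameters(), config=ds_config)

    for features, labels in data:
        features, labels = features.to(engine.device), labels.to(engine.device)
        loss = torch.nn.functional.cross_entropy(engine(features), labels)
        engine.backward(loss)   # DeepSpeed manages loss scaling and gradient partitioning
        engine.step()           # optimizer step driven by the DeepSpeed engine

In practice the same script is started with DeepSpeed’s command-line launcher, which is how a single training loop like this spreads across many GPUs and machines.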

Along with the DeepSpeed announcement, Microsoft announced it has added support for distributed training to the ONNX Runtime, an open-source library designed to make models portable across hardware and operating systems. To date, the ONNX Runtime has focused on high-performance inferencing; today’s update adds support for model training, as well as the optimizations from the DeepSpeed library, which together enable performance improvements of up to 17 times over the current ONNX Runtime.
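
The ONNX Runtime’s long-standing inferencing path gives a feel for the portability being extended here: a model exported to the ONNX format from any framework can be loaded and run wherever the runtime is available. A minimal sketch, assuming a throwaway PyTorch model exported on the fly (the newly announced training APIs are not shown):

    # Export a small PyTorch model to ONNX, then run it with ONNX Runtime (inference path only).
    import numpy as np
    import torch
    import onnxruntime as ort

    model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))
    example = torch.randn(1, 4)
    torch.onnx.export(model, example, "tiny.onnx",
                      input_names=["input"], output_names=["logits"])

    session = ort.InferenceSession("tiny.onnx", providers=["CPUExecutionProvider"])
    outputs = session.run(None, {"input": np.random.randn(1, 4).astype(np.float32)})
    print(outputs[0].shape)   # (1, 2): same model, now decoupled from the framework that built it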

“We want to be able to build these very advanced AI technologies that ultimately can be easily used by people to help them get their work done and accomplish their goals more quickly,” said Microsoft principal program manager Phil Waymouth. “These large models are going to be an enormous accelerant.”

Learning the nuances of language

Designing AI models that might one day understand the world more like people do starts with language, a critical component to understanding human intent, making sense of the vast amount of written knowledge in the world and communicating more effortlessly.

Neural network models that can process language, which are roughly inspired by our understanding of the human brain, aren’t new. But these deep learning models are now far more sophisticated than earlier versions and are rapidly escalating in size.

A year ago, the largest models had 1 billion parameters, each loosely equivalent to a synaptic connection in the brain. The Microsoft Turing model for natural language generation now stands as the world’s largest publicly available language AI model with 17 billion parameters.

This new class of models learns differently than supervised learning models that rely on meticulously labeled human-generated data to teach an AI system to recognize a cat or determine whether the answer to a question makes sense.

In what’s known as “self-supervised” learning, these AI models can learn about language by examining billions of pages of publicly available documents on the internet — Wikipedia entries, self-published books, instruction manuals, history lessons, human resources guidelines. In something like a giant game of Mad Libs, words or sentences are removed, and the model has to predict the missing pieces based on the words around it.
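
The blog does not name specific tooling, but the fill-in-the-blank objective it describes can be tried against any publicly available masked language model. A brief sketch using the Hugging Face transformers library; the model choice is an assumption for illustration, not something Microsoft specifies:

    # Ask a pretrained masked language model to fill in a removed word,
    # the "giant game of Mad Libs" described above. Model choice is illustrative.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")
    for prediction in fill_mask("The supercomputer was built to train very large AI [MASK]."):
        print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")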

As the model does this billions of times, it gets very good at perceiving how words relate to each other. This results in a rich understanding of grammar, concepts, contextual relationships and other building blocks of language. It also allows the same model to transfer lessons learned across many different language tasks, from document understanding to answering questions to creating conversational bots.

“This has enabled things that were seemingly impossible with smaller models,” said Luis Vargas, a Microsoft partner technical advisor who is spearheading the company’s AI at Scale initiative.

The improvements are somewhat like jumping from an elementary reading level to a more sophisticated and nuanced understanding of language. But it’s possible to improve accuracy even further by fine-tuning these large AI models on a more specific language task or exposing them to material that’s specific to a particular industry or company.

“Because every organization is going to have its own vocabulary, people can now easily fine-tune that model to give it a graduate degree in understanding business, healthcare or legal domains,” he said.

AI at Scale

One advantage of the next generation of large AI models is that they only need to be trained once with massive amounts of data and supercomputing resources. A company can take a “pre-trained” model and simply fine-tune it for different tasks with much smaller datasets and resources.
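
Concretely, fine-tuning means reusing the pre-trained weights and continuing training briefly on a small, task-specific dataset. A hedged sketch using PyTorch and the transformers library, where the model choice, texts, labels, and hyperparameters are placeholders for illustration:

    # Fine-tune a pre-trained language model on a tiny, made-up classification task.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2)   # reuses the pre-trained weights

    texts = ["Please reset my password", "Invoice attached for Q3"]
    labels = torch.tensor([0, 1])                  # 0 = IT request, 1 = finance (invented labels)
    batch = tokenizer(texts, padding=True, return_tensors="pt")

    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    for _ in range(3):                             # a few passes over the small dataset
        loss = model(**batch, labels=labels).loss  # only light adaptation is needed
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()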

The Microsoft Turing model for natural language understanding, for instance, has been used across the company to improve a wide range of product offerings over the last year. It has significantly advanced caption generation and question answering in Bing, improving answers to search questions in some markets by up to 125 percent.

In Office, the same model has fueled advances in the Smart Find feature, enabling easier searches in Word; the Key Insights feature, which extracts important sentences to quickly locate key points in Word; and Outlook’s Suggested Replies feature, which automatically generates possible responses to an email. Dynamics 365 Sales Insights also uses it to suggest actions to a seller based on interactions with customers.

Microsoft is also exploring large-scale AI models that can learn in a generalized way across text, images, and video. That could help with automatic captioning of images for accessibility in Office, for instance, or improve the ways people search Bing by understanding what’s inside images and videos.

To train its own models, Microsoft had to develop its own suite of techniques and optimization tools, many of which are now available in the DeepSpeed PyTorch library and ONNX Runtime. These allow people to train very large AI models across many computing clusters and also to squeeze more computing power from the hardware.

That requires partitioning a large AI model into its many layers and distributing those layers across different machines, a process called model parallelism. In a process called data parallelism, Microsoft’s optimization tools also split the huge amount of training data into batches that are used to train multiple instances of the model across the cluster, which are then periodically averaged to produce a single model.
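
In PyTorch terms (the blog stays at the conceptual level), model parallelism amounts to placing different layers on different devices and moving activations between them, while data parallelism replicates the model so each worker trains on its own shard of the data and gradients are averaged. A rough, generic sketch of both ideas, not Microsoft’s actual tooling:

    # Conceptual sketch of model parallelism and data parallelism (not Microsoft's code).
    import torch
    import torch.nn as nn

    # Fall back to CPU so the sketch runs anywhere; real use assumes multiple GPUs.
    devices = ["cuda:0", "cuda:1"] if torch.cuda.device_count() >= 2 else ["cpu", "cpu"]

    class TwoStageModel(nn.Module):
        """Model parallelism: first half of the layers on one device, second half on another."""
        def __init__(self):
            super().__init__()
            self.stage1 = nn.Linear(512, 512).to(devices[0])
            self.stage2 = nn.Linear(512, 10).to(devices[1])

        def forward(self, x):
            x = torch.relu(self.stage1(x.to(devices[0])))
            return self.stage2(x.to(devices[1]))     # activations hop between devices

    model = TwoStageModel()
    out = model(torch.randn(8, 512))                  # one forward pass spans both devices

    # Data parallelism: each worker process holds a replica and trains on its own batch shard;
    # DistributedDataParallel averages gradients after every backward pass. It needs
    # torch.distributed to be initialized (e.g. via torchrun), so it is shown commented out.
    # ddp_model = nn.parallel.DistributedDataParallel(model)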

The efficiencies that Microsoft researchers and engineers have achieved in this kind of distributed training will make using large-scale AI models much more resource-efficient and cost-effective for everyone, Microsoft says.

When you’re developing a cloud platform for general use, Scott said, it’s critical to have projects like the OpenAI supercomputing partnership and AI at Scale initiative pushing the cutting edge of performance.

He compares it to the automotive industry developing high-tech innovations for Formula 1 race cars that eventually find their way into the sedans and sport utility vehicles that people drive every day.

“By developing this leading-edge infrastructure for training large AI models, we’re making all of Azure better,” Scott said. “We’re building better computers, better distributed systems, better networks, better datacenters. All of this makes the performance and cost and flexibility of the entire Azure cloud better.”

Source: https://blogs.microsoft.com/ai/openai-azure-supercomputer/

Optimize Costs and Maximize Control with Private Cloud Computing
08 May

A private cloud gives you always-on availability and scalability with a long-term cost advantage.

If you have strict requirements for data privacy or resource management or want to optimize costs over the long term, a private cloud hosted either in your data center or by a third-party provider is a smart option.

Business Advantages of Private Cloud Computing

  • A private cloud can be hosted on infrastructure in your data center or by a third-party provider as a managed private cloud. Both options deliver services to users via the internet.
  • With a private cloud, you get more control over data and resources, support for custom applications that can’t be migrated to the public cloud, and a lower cost over the long term.

When choosing your best cloud deployment model, you’ll need to take into account your unique business needs—including desired CapEx and OpEx, the types of workloads you’ll be running, and your available IT resources.

Many organizations will need some amount of private cloud services. A private cloud is commonly hosted in your data center and maintained by your IT team, with services delivered to your users via the internet. It can also be hosted off-premises by a third-party provider as a managed private cloud.

Benefits of the private cloud

A private cloud gives you more control over how you use computing, storage, and networking. These always-on resources provide on-demand data availability, ensuring reliability and support for mission-critical workloads. You also get more control over security and privacy for data governance. This way, you can ensure compliance with any regulations, such as the European Union’s General Data Protection Regulation (GDPR).

Furthermore, a private cloud allows you to support internally developed applications, protect intellectual property, and support legacy applications that were not built for the public cloud.

It’s also the best path for optimizing your computing costs. Over the long term, running certain workloads on a private cloud can deliver a lower total cost of ownership (TCO), as you deliver more computing power with less physical hardware. However, setting up and maintaining a private cloud on-premises requires a higher cost upfront as you purchase IT infrastructure.

Because private clouds give you both scalability and elasticity, you can respond quickly to changing workload demands. Your IT team can set up a self-service portal and spin up a virtual machine in minutes. They can also enable a single-tenant environment in which software can be customized to meet your organization’s needs.
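
The article doesn’t name a platform, but OpenStack is a common choice for building exactly this kind of self-service private cloud. A hypothetical sketch using the openstacksdk Python client; the cloud profile, image, flavor, and network names are placeholders your environment would define:

    # Hypothetical self-service VM provisioning against a private OpenStack cloud.
    import openstack

    conn = openstack.connect(cloud="my-private-cloud")    # credentials come from clouds.yaml

    image = conn.compute.find_image("ubuntu-20.04")        # placeholder image name
    flavor = conn.compute.find_flavor("m1.medium")         # placeholder instance size
    network = conn.network.find_network("tenant-net")      # placeholder tenant network

    server = conn.compute.create_server(
        name="analytics-vm-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)          # blocks until the VM is ACTIVE
    print(server.status)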

Private cloud use cases

There are certain scenarios in which private infrastructure is best for hosting cloud services. While these use cases are most common among government, defense, scientific, and engineering organizations, they can also occur in any business, depending on the specific needs. In short, a private cloud is ideal for any use case in which you must do the following:

  • Protect sensitive information, including intellectual property
  • Meet data sovereignty or compliance requirements
  • Ensure high availability, as with mission-critical applications
  • Support internally developed or legacy applications

In some cases, you may want to set up a virtual private cloud, an on-demand pool of computing resources that provides isolation for approved users. This gives you an extra layer of control for privacy and security purposes.

A private cloud gives you more control over your data and resources, support for proprietary or legacy applications, and a better TCO over the long term.

Need help determining which cloud solution best suits your business?

Call us today at 855-225-4535 for a free consultation or click here

Source: https://www.intel.com/content/www/us/en/cloud-computing/what-is-private-cloud.html

Learn More – Cloud computing: A complete guide

Cloud computing: A complete guide
08 May

Cloud computing is no longer something new — 94% of companies use it in some form. Cloud computing is today’s standard for competing effectively and speeding up your digital transformation.

What is cloud computing?

Cloud computing, sometimes referred to simply as “cloud,” is the use of computing resources — servers, database management, data storage, networking, software applications, and special capabilities such as blockchain and artificial intelligence (AI) — over the internet, as opposed to owning and operating those resources yourself, on premises.

Compared to traditional IT, cloud computing offers organizations a host of benefits: the cost-effectiveness of paying for only the resources you use; faster time to market for mission-critical applications and services; the ability to scale easily, affordably and — with the right cloud provider — globally; and much more (see “What are the benefits of cloud computing?” below). And many organizations are seeing additional benefits from combining public cloud services purchased from a cloud services provider with private cloud infrastructure they operate themselves to deliver sensitive applications or data to customers, partners and employees.

Increasingly, “cloud computing” is becoming synonymous with “computing.” For example, in a 2019 survey of nearly 800 companies, 94% were using some form of cloud computing. Many businesses are still in the first stages of their cloud journey, having migrated or deployed about 20% of their applications to the cloud, and are working out the unique security, compliance and geographic implications of moving their remaining mission-critical applications. But move they will: Industry analyst Gartner predicts that more than half of companies using cloud today will move to an all-cloud infrastructure by next year (2021).

A brief history of cloud computing

Cloud computing dates back to the 1950s, and over the years, it has evolved through many phases that were first pioneered by IBM, including grid, utility, and on-demand computing.

What are the benefits of cloud computing?

Compared to traditional IT, cloud computing typically enables:

  • Greater cost-efficiency. While traditional IT requires you to purchase computing capacity in anticipation of growth or surges in traffic — a capacity that sits unused until you grow or traffic surges — cloud computing enables you to pay for only the capacity you need when you need it. Cloud also eliminates the ongoing expense of purchasing, housing, maintaining, and managing infrastructure on-premises.
  • Improved agility; faster time to market. On the cloud you can provision and deploy (“spin up”) a server in minutes; purchasing and deploying the same server on-premises might take weeks or months.
  • Greater scalability and elasticity. Cloud computing lets you scale workloads automatically — up or down — in response to business growth or surges in traffic. And working with a cloud provider that has data centers spread around the world enables you to scale up or down globally on demand, without sacrificing performance.
  • Improved reliability and business continuity. Because most cloud providers have redundancy built into their global networks, data backup and disaster recovery are typically much easier and less expensive to implement effectively in the cloud than on-premises. Providers who offer packaged disaster recovery solutions, referred to as disaster recovery as a service (DRaaS), make the process even easier, more affordable, and less disruptive.
  • Continually improving performance. The leading cloud service providers regularly update their infrastructure with the latest, highest-performing computing, storage, and networking hardware.
  • Better security, built-in. Traditionally, security concerns have been the leading obstacle for organizations considering cloud adoption. But in response to demand, the security offered by cloud service providers is steadily outstripping on-premises solutions. According to security software provider McAfee, today 52% of companies experience better security in the cloud than on-premises. Gartner has predicted that by this year (2020), infrastructure as a service (IaaS) cloud workloads will experience 60% fewer security incidents than those in traditional data centers.

With the right provider, the cloud also offers the added benefit of greater choice and flexibility. Specifically, a cloud provider that supports open standards and a hybrid multicloud implementation (see “Multicloud and Hybrid Multicloud” below) gives you the choice and flexibility to combine cloud and on-premises resources from any number of vendors into a single, optimized, seamlessly integrated infrastructure you can manage from a single point of control, an infrastructure in which each workload runs in the best possible location based on its specific performance, security, regulatory compliance, and cost requirements.

Cloud computing storage

Storage growth continues at a significant rate, driven by new workloads like analytics, video, and mobile applications. While storage demand is increasing, most IT organizations are under continued pressure to lower the cost of their IT infrastructure through the use of shared cloud computing resources. It’s vital for software designers and solution architects to match the specific requirements of their workloads to the appropriate storage solution or, in many enterprise cases, a mix.

One of the biggest advantages of cloud storage is flexibility. A company can manage, analyze, add to, and transfer its data all from a single dashboard, something that is impossible to do with storage hardware that sits alone in a data center.

The other major benefit of storage software is that it can access and analyze any kind of data wherever it lives, no matter the hardware, platform, or format. So, from mobile devices linked to your bank to servers full of unstructured social media information, data can be understood via the cloud.
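
As one concrete illustration of working with data wherever it lives, most cloud object stores expose an S3-compatible API. A brief sketch using the boto3 client; the bucket, key, and contents are placeholders, and the same pattern works against S3-compatible endpoints from many providers:

    # Upload and read back an object from S3-compatible cloud storage (names are placeholders).
    import boto3

    s3 = boto3.client("s3")   # credentials are picked up from the environment or config files

    s3.put_object(Bucket="example-analytics-bucket",
                  Key="reports/2020-05/summary.json",
                  Body=b'{"visits": 1024}')

    obj = s3.get_object(Bucket="example-analytics-bucket",
                        Key="reports/2020-05/summary.json")
    print(obj["Body"].read().decode("utf-8"))   # the stored JSON text, readable from anywhere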

Learn more about cloud storage

The future of cloud

Within the next three years, 75 percent of existing non-cloud apps are expected to move to the cloud. Today’s computing landscape shows companies not only adopting cloud but using more than one cloud environment. Even then, the cloud journey for many has only just begun, moving beyond low-end infrastructure as a service to establish higher business value.

Source: https://www.ibm.com/cloud/learn/cloud-computing

Learn More – Optimize Costs and Maximize Control with Private Cloud Computing