Google I/O 2022: What to expect from the next developer conference
28 Mar

Google’s annual developer conference is nearly upon us once more – but what could the tech giant be hiding up its sleeve this year?

Every year Google hosts a showcase of its newest software features named Google I/O (meaning Input/Output). The primary purpose is for developers to get to grips with any changes to the likes of the Android smartphone operating system, or the Wear operating system for smartwatches.

However, some previous editions of the event have included hardware launches too. Read on for all that we know so far about the 2022 event.

When is Google I/O 2022?

Google I/O 2022 will be held on Wednesday 11th to Thursday 12th May this year. This is typical for the event, which has taken place in May or June every year since its inception in 2008.

How can I watch Google I/O 2022?

Unlike last year’s edition, which was fully virtual, there will be a physical presence for Google I/O 2022 at the Shoreline Amphitheatre in the San Francisco Bay Area.

A limited live audience will be permitted to attend in person, but the rest of us will be able to watch a livestream of the event, which will be completely free and open to everyone.

As soon as the link is live, we will publish it on this page.

What can we expect to see at Google I/O 2022?

Firstly, we’re certainly going to see a whole lot more of Android 13, the next smartphone operating system. A developer preview has already been launched, with new privacy and security features at the core of this update, and we expect to see more features announced at Google I/O on top of that.

Wear, Google’s software for smartwatches that is supported by the likes of the Samsung Galaxy Watch 4, may also receive some attention; it was centre stage last year, but this year could see some more tweaks made to the platform.

On top of the software goodies, there could even be a hardware launch as well; the upcoming Google Pixel 6a is likely to make its debut at the event, as an affordable model in Google’s excellent range of smartphones.

Though there had long been rumors of a Pixel Watch being released at some point, the hype seems to have died down a bit for now so we’re not necessarily expecting to see it unveiled any time soon.

At last year’s event, Google also announced that it was working to improve the accuracy of skin tones in its cameras (which was later demonstrated on the Pixel 6 series) and that you would be able to store digital car keys on your phone.

Click here to register for Google I/O 2022

Five Years of VR: A Look at the Greatest Moments from Oculus
03 Jan

From Meta Editorial,

We released Oculus Rift in March 2016. It was a big moment — the launch of the first consumer VR headset of the modern era. And it was just the start. We’ve released five headsets in the past five years, each driving the technology forward and enabling major improvements in how people interact with VR and the experiences developers can offer.

We wanted to take a moment to celebrate the achievements of so many who have contributed to making VR what it is today.

Oculus Rift – March 2016

Before Rift, there was the Kickstarter — a crowdfunding campaign to raise money for DK1. By 2014, Facebook had acquired Oculus with a vision of VR as the next computing platform.

Rift was the first step toward that vision, a $599 fabric-covered headset with flip-down headphones, an external sensor, a remote and an Xbox controller.

Caitlin Kalinowski – Head of VR Hardware: “I give credit to Peter [Bristol]’s team. They took an ugly prototype, Crescent Bay, and figured out how to package it into a beautiful and elegant piece of consumer electronics. The fabric application, figuring out how to integrate audio into the straps… I think it established what consumer VR would be. Almost all VR since then has been derivative of Rift CV1 in some way. It showed people that VR could be a consumer product.”

Oculus Touch Launch – December 2016

Touch enabled players to essentially bring their hands into the virtual world. That ended up being a key turning point for VR, with games like Robo Recall, SUPERHOT VR, Arizona Sunshine, The Unspoken and The Gallery exploring the possibilities of this new control scheme and paving the way for Lone Echo, Beat Saber, Asgard’s Wrath and countless others. By the following summer, Rift and Touch were permanently bundled together.

Peter Bristol – Head of Industrial Design for FRL: “We probably made hundreds of models — just simple sticks with clay, crumpled paper, sanded foams, et cetera — trying to figure out how to get the ergonomics to work. The idea of the controllers becoming your hands set the trajectory of the entire program. We were trying to get your hands into this natural pose while holding the controllers so that your virtual hands matched your real hands. We also developed movements like the grip trigger to be similar to motions you use in the real world, so it would feel intuitive to use.”

Oculus Go – May 2018

Oculus Go reimagined the media-centric Gear VR as an all-in-one device — our first — with better lenses and longer battery life, a sleek new strap-based audio solution, higher-resolution screens and a mainstream-friendly $199 price point.

Matt Dickman – TPM, Health and Safety: “Go was the first time we really thought intentionally about accessibility and what that might mean in VR. You look at the attachment points inside the headset. Those were designed to hold the fabric interface originally — but you could repurpose those mounts to accommodate prescription lens inserts. That sort of thinking around comfort and ergonomics takes a long time (and internal design support), but it led to the accessories you see on Quest 2 today.”

Oculus Quest and Rift S – May 2019

In May 2019, we released two headsets on the same day: Rift S and Quest. Each sported a state-of-the-art inside-out tracking solution (Oculus Insight), higher resolution panels and a $399 price point. Rift S was a refinement of our previous PC efforts, while Quest was a groundbreaking all-in-one device that offered a relatively comparable experience without wires or any additional hardware, making VR more accessible to more people and helping developers reach new, larger audiences in the process.

By the end of 2019, Oculus Link allowed players to connect Quest to a PC for a best-of-both-worlds experience. And in 2019, the addition of an innovative hand tracking solution gave Quest users a glimpse of a more natural and intuitive VR future.

Atman Binstock – Chief Architect of Oculus VR: “I always believed that long-term, standalone VR would be the path forward. The question was more of when. My personal belief was that something like Rift and Touch was the experience we’d need to deliver on standalone. Not the same level of performance as a PC, but the self-presence, interaction, and social presence. And VR’s competing for people’s time with TVs, with laptops. So making the product easier to use, making “time to fun” lower—that’s huge.”

Oculus Quest 2 – October 2020

Our goal with Quest 2 was to give both players and developers a higher-powered and more customizable device — and do it for $100 less. That goal grew to be more difficult when the COVID-19 pandemic hit, but we managed to ship Quest 2 in October and are already so proud of its success and the success it’s brought developers.

Rangaprabhu Parthasarathy – Product Manager, Quest / Quest 2: “When we started Quest 2, we looked at all the things we did on the original Quest and CV1 and said what do we need in a mass market device? Affordable. Easy to use. Think about the setup process. Think about accessibility. Make it available in more places, make it friendly.”

The Next Five Years

What’s next? Michael Abrash has been making predictions about VR’s future since the first Oculus Connect, so we asked him. You can read his full response in our full oral history, but the short answer is:

Michael Abrash – Chief Scientist, Facebook Reality Labs: “We are at the very beginning. All this innovation, all this invention still has to happen with VR. Early VR rode on the back of other work that had been done. The cameras were cellphone cameras and the optics were basically off-the-shelf optics initially. Going from this point forward, we’re the ones who are developing it—and that’s exciting. It’s good. But it is also really, really challenging on the innovation front. People should realize that we’ve come a long way and we’ve done a great job — but this road stretches out for the rest of their lifetimes.”

We’re grateful to everyone who’s helped make VR real over the last five-plus years, including everyone who’s bought a headset or even shared a headset with their friends and family. Here’s to five more years and beyond.

To read the full oral history visit: tech.fb.com/five-years-of-vr-an-oral-history-from-oculus-rift-to-quest-2/

Source: https://about.fb.com/news/2021/03/five-years-of-vr-a-look-at-the-greatest-moments-from-oculus/

Windows 11 available on October 5
03 Sep

Today, we are thrilled to announce that Windows 11 will start to become available on October 5, 2021. On that day, the free upgrade to Windows 11 will begin rolling out to eligible Windows 10 PCs, and PCs that come pre-loaded with Windows 11 will start to become available for purchase. A new Windows experience, Windows 11 is designed to bring you closer to what you love.

As the PC continues to play a more central role in our lives than ever before, Windows 11 is ready to empower your productivity and inspire your creativity.

Here are 11 highlights of this release

  1. The new design and sounds are modern, fresh, clean and beautiful, bringing you a sense of calm and ease.
  2. With Start, we’ve put you and your content at the center. Start utilizes the power of the cloud and Microsoft 365 to show you your recent files no matter what device you were viewing them on.
  3. Snap Layouts, Snap Groups and Desktops provide an even more powerful way to multitask and optimize your screen real estate.
  4. Chat from Microsoft Teams integrated into the taskbar provides a faster way to connect to the people you care about.
  5. Widgets, a new personalized feed powered by AI, provides a faster way to access the information you care about, and with Microsoft Edge’s world class performance, speed and productivity features you can get more done on the web.
  6. Windows 11 delivers the best Windows ever for gaming and unlocks the full potential of your system’s hardware with technology like DirectX 12 Ultimate, DirectStorage and Auto HDR. With Xbox Game Pass for PC or Ultimate you get access to over 100 high-quality PC games to play on Windows 11 for one low monthly price. (Xbox Game Pass sold separately.)
  7. Windows 11 comes with a new Microsoft Store rebuilt with an all-new design making it easier to search and discover your favorite apps, games, shows, and movies in one trusted location. We look forward to continuing our journey to bring Android apps to Windows 11 and the Microsoft Store through our collaboration with Amazon and Intel; this will start with a preview for Windows Insiders over the coming months.
  8. Windows 11 is the most inclusively designed version of Windows with new accessibility improvements that were built for and by people with disabilities.
  9. Windows 11 unlocks new opportunities for developers and creators. We are opening the Store to allow more developers and independent software vendors (ISVs) to bring their apps to the Store, improving native and web app development with new developer tools, and making it easier for you to refresh the look and feel across all our app designs and experiences.
  10. Windows 11 is optimized for speed, efficiency and improved experiences with touch, digital pen and voice input.
  11. Windows 11 is the operating system for hybrid work, delivering new experiences that work how you work, are secure by design, and easy and familiar for IT to deploy and manage. Businesses can also test Windows 11 in preview today in Azure Virtual Desktop, or at general availability by experiencing Windows 11 in the new Windows 365.

Thank you to the Windows Insider Community

The Windows Insider community has been invaluable in helping us get to where we are today. Since the first Insider Preview Build was released in June, the engagement and feedback have been unprecedented. The team has also enjoyed sharing more behind-the-scenes stories on the development of Windows 11 in a new series we launched in June, Inside Windows 11. We sincerely appreciate the energy and enthusiasm from this community.

Rolling out the free upgrade to Windows 11 in a phased and measured approach

The free upgrade to Windows 11 starts on October 5 and will be phased and measured with a focus on quality. Following the tremendous learnings from Windows 10, we want to make sure we’re providing you with the best possible experience. That means new eligible devices will be offered the upgrade first. The upgrade will then roll out over time to in-market devices based on intelligence models that consider hardware eligibility, reliability metrics, age of device and other factors that impact the upgrade experience. We expect all eligible devices to be offered the free upgrade to Windows 11 by mid-2022. If you have a Windows 10 PC that’s eligible for the upgrade, Windows Update will let you know when it’s available. You can also check to see if Windows 11 is ready for your device by going to Settings > Windows Update and selecting Check for updates*.

Ready to elevate to 11? There’s never been a better time to purchase a new PC

October 5 is right around the corner — and there are a few things you can do to get ready for Windows 11. First, if you’re in need of a new PC now — don’t wait. You can get all the power and performance of a new Windows 10 PC and upgrade to Windows 11 for free after the rollout begins on October 5**.

We’ve worked closely with our OEM and retail partners to bring you powerful Windows 10 PCs today that will take you into the future with Windows 11. Here are a few to check out.

Source: https://blogs.windows.com/windowsexperience/2021/08/31/windows-11-available-on-october-5/

Google’s Core Web Vitals to Become Ranking Signals
02 Jun

Google announces an upcoming change to search rankings that will incorporate Core Web Vitals as a ranking signal.

Search signals for page experience

“The page experience signal measures aspects of how users perceive the experience of interacting with a web page. Optimizing for these factors makes the web more delightful for users across all web browsers and surfaces, and helps sites evolve towards user expectations on mobile.”

Google is introducing a new ranking signal, which combines Core Web Vitals with existing user experience signals, to improve the way it evaluates the overall experience provided by a page.

This new ranking signal is in the early stages of development and is not scheduled to launch until at least next year.

To help site owners prepare, Google has provided an early look at the work being done so far.

The New ‘Page Experience’ Signal

The upcoming ranking signal will be known as the page experience signal.

The page experience signal consists of the Core Web Vitals, as well as these existing page experience metrics:

  • Mobile-friendliness
  • Safe-browsing
  • HTTPS-security
  • Intrusive interstitial guidelines

Core Web Vitals

Core Web Vitals, introduced earlier this month, are a set of metrics related to speed, responsiveness, and visual stability.

Google has defined these as the Core Web Vitals:

  • Largest Contentful Paint: The time it takes for a page’s main content to load. An ideal LCP measurement is 2.5 seconds or faster.
  • First Input Delay: The time it takes for a page to become interactive. An ideal measurement is less than 100 milliseconds.
  • Cumulative Layout Shift: The amount of unexpected layout shift of visual page content. An ideal measurement is less than 0.1.

This set of metrics was designed to help site owners measure the user experience they’re providing when it comes to loading, interactivity, and visual stability.

Core Web Vitals are not set in stone – which means they may change from year to year depending on what users expect out of a good web page experience.

For now, the Core Web Vitals are what is listed above. Google will certainly update the public if and when these metrics change.

For more details about Core Web Vitals, see our full report from when they were first introduced.

Page Experience Signal & Ranking

By adding Core Web Vitals as ranking factors, and combining them with other user experience signals, Google aims to help more site owners build pages that users enjoy visiting.

If Google determines that a page is providing a high-quality user experience, based on its page experience signal, then it will likely rank the page higher in search results.

However, content relevance is still considered important when it comes to rankings. A page with content that’s highly relevant to a query could conceivably rank well even if it had a poor page experience signal.

The opposite is also true, as Google states:

“A good page experience doesn’t override having great, relevant content. However, in cases where there are multiple pages that have similar content, page experience becomes much more important for visibility in Search.”

As Google mentions, the page experience signal is a tie-breaker of sorts, meaning that if there are two pages both providing excellent content, the one with the stronger page experience signal will rank higher in search results.

So don’t get so hung up on optimizing for page experience that the actual content on the page starts to suffer. Great content can, in theory, outrank a great page experience.

Evaluating Page Experience

As of yet, there is no specific tool for evaluating page experience as a whole.

However, it is possible to measure the individual components that go into creating the page experience signal.

Measuring Core Web Vitals

When it comes to measuring Core Web Vitals, SEOs and site owners can use a variety of Google’s own tools such as:

  • Search Console
  • PageSpeed Insights
  • Lighthouse
  • Chrome DevTools
  • Chrome UX report
  • And more

Soon, a plugin for the Chrome browser will also be available to quickly evaluate the Core Web Vitals of any page you’re looking at. Google is also working with third parties to bring Core Web Vitals to other tools.
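For readers who prefer to script these checks, the same field data that powers several of the tools above can be pulled from Google’s public PageSpeed Insights API. The sketch below is our own illustration, not Google’s sample code: the endpoint is real, but the metric key names (LARGEST_CONTENTFUL_PAINT_MS, FIRST_INPUT_DELAY_MS, CUMULATIVE_LAYOUT_SHIFT_SCORE) and response layout are assumptions based on the API at the time of writing and may change.

```python
# Illustrative sketch only (not Google sample code): pull Core Web Vitals field data
# for a URL from the public PageSpeed Insights v5 API. The metric key names below are
# assumptions and may differ between API versions.
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def fetch_core_web_vitals(url: str, strategy: str = "mobile") -> dict:
    """Return the Core Web Vitals field metrics reported for `url`, if any."""
    response = requests.get(
        PSI_ENDPOINT, params={"url": url, "strategy": strategy}, timeout=60
    )
    response.raise_for_status()
    metrics = response.json().get("loadingExperience", {}).get("metrics", {})
    wanted = {
        "LCP": "LARGEST_CONTENTFUL_PAINT_MS",
        "FID": "FIRST_INPUT_DELAY_MS",
        "CLS": "CUMULATIVE_LAYOUT_SHIFT_SCORE",
    }
    # Each entry (when present) carries a 75th-percentile value and a category
    # such as FAST, AVERAGE or SLOW.
    return {name: metrics.get(key) for name, key in wanted.items()}

if __name__ == "__main__":
    for name, data in fetch_core_web_vitals("https://example.com").items():
        if data:
            print(f"{name}: p75={data['percentile']}, category={data['category']}")
        else:
            print(f"{name}: no field data available for this URL")
```

Note that field data only exists for pages with enough real-user traffic in the Chrome UX Report; lab tools such as Lighthouse fill the gap for everything else.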

Measuring other user experience signals

Here’s how SEOs and site owners can measure the other user experience signals:

  • Mobile-friendliness: Use Google’s mobile-friendly test.
  • Safe-browsing: Check the Security Issues report in Search Console for any issues with safe browsing.
  • HTTPS: If a page is served over a secure HTTPS connection then it will display a lock icon in the browser address bar.
  • Intrusive interstitial guidelines: This one is a bit trickier. Review Google’s guidance on what counts as an intrusive interstitial.

When Will These Changes Happen?

There is no need to take immediate action, Google says, as these changes will not happen before next year.

Google will provide at least 6 months’ notice before they are rolled out.

The company is simply giving site owners a heads up in an effort to keep people informed about ranking changes as early as possible.

Source: https://www.searchenginejournal.com/googles-core-web-vitals-ranking-signal/370719/

Source: https://webmasters.googleblog.com/2020/05/evaluating-page-experience.html

Microsoft announces new supercomputer, lays out vision for future AI work
19 May

Microsoft has built one of the top five publicly disclosed supercomputers in the world, making new infrastructure available in Azure to train extremely large artificial intelligence models, the company is announcing at its Build developers conference.

Built in collaboration with and exclusively for OpenAI, the supercomputer hosted in Azure was designed specifically to train that company’s AI models. It represents a key milestone in a partnership announced last year to jointly create new supercomputing technologies in Azure.

It’s also a first step toward making the next generation of very large AI models and the infrastructure needed to train them available as a platform for other organizations and developers to build upon.

“The exciting thing about these models is the breadth of things they’re going to enable,” said Microsoft Chief Technical Officer Kevin Scott, who said the potential benefits extend far beyond narrow advances in one type of AI model.

“This is about being able to do a hundred exciting things in natural language processing at once and a hundred exciting things in computer vision, and when you start to see combinations of these perceptual domains, you’re going to have new applications that are hard to even imagine right now,” he said.

A new class of multitasking AI models

Machine learning experts have historically built separate, smaller AI models that use many labeled examples to learn a single task such as translating between languages, recognizing objects, reading text to identify key points in an email, or recognizing speech well enough to deliver today’s weather report when asked.

A new class of models developed by the AI research community has proven that some of those tasks can be performed better by a single massive model — one that learns from examining billions of pages of publicly available text, for example. This type of model can so deeply absorb the nuances of language, grammar, knowledge, concepts, and context that it can excel at multiple tasks: summarizing a lengthy speech, moderating content in live gaming chats, finding relevant passages across thousands of legal files or even generating code from scouring GitHub.

As part of a companywide AI at Scale initiative, Microsoft has developed its own family of large AI models, the Microsoft Turing models, which it has used to improve many different language understanding tasks across Bing, Office, Dynamics, and other productivity products.  Earlier this year, it also released to researchers the largest publicly available AI language model in the world, the Microsoft Turing model for natural language generation.

The goal, Microsoft says, is to make its large AI models, training optimization tools, and supercomputing resources available through Azure AI services and GitHub so developers, data scientists, and business customers can easily leverage the power of AI at Scale.

“By now most people intuitively understand how personal computers are a platform — you buy one and it’s not like everything the computer is ever going to do is built into the device when you pull it out of the box,” Scott said.

“That’s exactly what we mean when we say AI is becoming a platform,” he said. “This is about taking a very broad set of data and training a model that learns to do a general set of things and making that model available for millions of developers to go figure out how to do interesting and creative things with.”

Training massive AI models requires advanced supercomputing infrastructure, or clusters of state-of-the-art hardware connected by high-bandwidth networks. It also needs tools to train the models across these interconnected computers.

The supercomputer developed for OpenAI is a single system with more than 285,000 CPU cores, 10,000 GPUs, and 400 gigabits per second of network connectivity for each GPU server. Compared with other machines listed on the TOP500 supercomputers in the world, it ranks in the top five, Microsoft says. Hosted in Azure, the supercomputer also benefits from all the capabilities of robust modern cloud infrastructure, including rapid deployment, sustainable data centers, and access to Azure services.

“As we’ve learned more and more about what we need and the different limits of all the components that make up a supercomputer, we were really able to say, ‘If we could design our dream system, what would it look like?’” said OpenAI CEO Sam Altman. “And then Microsoft was able to build it.”

OpenAI’s goal is not just to pursue research breakthroughs but also to engineer and develop powerful AI technologies that other people can use, Altman said. The supercomputer developed in partnership with Microsoft was designed to accelerate that cycle.

“We are seeing that larger-scale systems are an important component in training more powerful models,” Altman said.

For customers who want to push their AI ambitions but who don’t require a dedicated supercomputer, Azure AI provides access to powerful computing with the same set of AI accelerators and networks that also power the supercomputer. Microsoft is also making available the tools to train large AI models on these clusters in a distributed and optimized way.

At its Build conference, Microsoft announced that it would soon begin open-sourcing its Microsoft Turing models, as well as recipes for training them in Azure Machine Learning. This will give developers access to the same family of powerful language models that the company has used to improve language understanding across its products.

It also unveiled a new version of DeepSpeed, an open-source deep-learning library for PyTorch that reduces the amount of computing power needed for large distributed model training. The update is significantly more efficient than the version released just three months ago and now allows people to train models more than 15 times larger and 10 times faster than they could without DeepSpeed on the same infrastructure.
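To give a sense of what using DeepSpeed looks like in practice, here is a minimal, hypothetical training sketch. It is not Microsoft’s code: the tiny model, the random data and the ds_config.json path are placeholders, and the exact configuration options (ZeRO stage, precision, batch sizes) depend on the DeepSpeed version you install.

```python
# Hypothetical training sketch, not Microsoft's code. Assumes `pip install deepspeed`,
# a launch via the `deepspeed` launcher, and a ds_config.json that defines the
# optimizer, batch size, precision and ZeRO settings (all placeholders here).
import torch
import torch.nn as nn
import deepspeed

class TinyClassifier(nn.Module):
    def __init__(self, dim: int = 128, classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, classes))

    def forward(self, x):
        return self.net(x)

model = TinyClassifier()

# deepspeed.initialize wraps the model and applies the distributed, precision and
# memory optimizations described in the JSON config.
engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config="ds_config.json",  # placeholder path
)

loss_fn = nn.CrossEntropyLoss()
for step in range(10):
    x = torch.randn(32, 128, device=engine.device)       # stand-in for real batches
    y = torch.randint(0, 4, (32,), device=engine.device)
    loss = loss_fn(engine(x), y)
    engine.backward(loss)   # DeepSpeed handles gradient scaling/partitioning
    engine.step()           # optimizer step plus internal bookkeeping
```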

Along with the DeepSpeed announcement, Microsoft announced it has added support for distributed training to the ONNX Runtime. The ONNX Runtime is an open-source library designed to enable models to be portable across hardware and operating systems. To date, the ONNX Runtime has focused on high-performance inferencing; today’s update adds support for model training, as well as adding the optimizations from the DeepSpeed library, which enable performance improvements of up to 17 times over the current ONNX Runtime.
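The inferencing side that ONNX Runtime has focused on to date can be illustrated with a short, self-contained sketch of our own (the model, file name and shapes are arbitrary): export a PyTorch model to the portable ONNX format, then execute it with onnxruntime.

```python
# Small self-contained sketch (ours, with arbitrary names and shapes): export a PyTorch
# model to the portable ONNX format, then run inference with ONNX Runtime on CPU.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2)).eval()
dummy_input = torch.randn(1, 16)

# Export once; the .onnx file can then run across hardware and operating systems.
torch.onnx.export(
    model, dummy_input, "tiny_model.onnx",
    input_names=["input"], output_names=["logits"],
)

session = ort.InferenceSession("tiny_model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(["logits"], {"input": np.random.randn(1, 16).astype(np.float32)})
print(outputs[0].shape)  # (1, 2)
```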

“We want to be able to build these very advanced AI technologies that ultimately can be easily used by people to help them get their work done and accomplish their goals more quickly,” said Microsoft principal program manager Phil Waymouth. “These large models are going to be an enormous accelerant.”

Learning the nuances of language

Designing AI models that might one day understand the world more like people do starts with language, a critical component to understanding human intent, making sense of the vast amount of written knowledge in the world and communicating more effortlessly.

Neural network models that can process language, which are roughly inspired by our understanding of the human brain, aren’t new. But these deep learning models are now far more sophisticated than earlier versions and are rapidly escalating in size.

A year ago, the largest models had 1 billion parameters, each loosely equivalent to a synaptic connection in the brain. The Microsoft Turing model for natural language generation now stands as the world’s largest publicly available language AI model with 17 billion parameters.

This new class of models learns differently than supervised learning models that rely on meticulously labeled human-generated data to teach an AI system to recognize a cat or determine whether the answer to a question makes sense.

In what’s known as “self-supervised” learning, these AI models can learn about language by examining billions of pages of publicly available documents on the internet — Wikipedia entries, self-published books, instruction manuals, history lessons, human resources guidelines. In something like a giant game of Mad Libs, words or sentences are removed, and the model has to predict the missing pieces based on the words around it.

As the model does this billions of times, it gets very good at perceiving how words relate to each other. This results in a rich understanding of grammar, concepts, contextual relationships and other building blocks of language. It also allows the same model to transfer lessons learned across many different language tasks, from document understanding to answering questions to creating conversational bots.
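To make the “Mad Libs” analogy concrete, here is a deliberately tiny, hypothetical sketch of the training signal: hide one word in a sentence and train a model to predict it from the surrounding words. Production systems use transformer networks and billions of documents rather than a three-sentence corpus and an embedding average, but the objective is the same.

```python
# Toy illustration of the self-supervised objective (ours, not Microsoft's): hide one
# word in a sentence and train a model to predict it from the surrounding words.
import random
import torch
import torch.nn as nn

corpus = [
    "the cloud makes large models easier to train",
    "large models learn grammar concepts and context",
    "developers fine tune a pretrained model for new tasks",
]
vocab = sorted({w for line in corpus for w in line.split()})
stoi = {w: i for i, w in enumerate(vocab)}
MASK_ID = len(vocab)  # extra id standing in for the hidden word

embed = nn.EmbeddingBag(len(vocab) + 1, 32)  # averages the context word embeddings
head = nn.Linear(32, len(vocab))             # predicts which word was hidden
opt = torch.optim.Adam(list(embed.parameters()) + list(head.parameters()), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    words = random.choice(corpus).split()
    hidden = random.randrange(len(words))    # pick the word to mask out
    context = [stoi[w] if i != hidden else MASK_ID for i, w in enumerate(words)]
    target = torch.tensor([stoi[words[hidden]]])
    logits = head(embed(torch.tensor([context])))
    loss = loss_fn(logits, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final training loss:", float(loss))
```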

“This has enabled things that were seemingly impossible with smaller models,” said Luis Vargas, a Microsoft partner technical advisor who is spearheading the company’s AI at Scale initiative.

The improvements are somewhat like jumping from an elementary reading level to a more sophisticated and nuanced understanding of language. But it’s possible to improve accuracy even further by fine-tuning these large AI models on a more specific language task or exposing them to material that’s specific to a particular industry or company.

“Because every organization is going to have its own vocabulary, people can now easily fine-tune that model to give it a graduate degree in understanding business, healthcare or legal domains,” he said.

AI at Scale

One advantage to the next generation of large AI models is that they only need to be trained once with massive amounts of data and supercomputing resources. A company can take a “pre-trained” model and simply fine-tune for different tasks with much smaller datasets and resources.

The Microsoft Turing model for natural language understanding, for instance, has been used across the company to improve a wide range of product offerings over the last year. It has significantly advanced caption generation and question answering in Bing, improving answers to search questions in some markets by up to 125 percent.

In Office, the same model has fueled advances in the smart find feature, enabling easier searches in Word, the Key Insights feature that extracts important sentences to quickly locate key points in Word, and in Outlook’s Suggested replies feature that automatically generates possible responses to an email. Dynamics 365 Sales Insights also uses it to suggest actions to a seller based on interactions with customers.

Microsoft is also exploring large-scale AI models that can learn in a generalized way across text, images, and video. That could help with automatic captioning of images for accessibility in Office, for instance, or improve the ways people search Bing by understanding what’s inside images and videos.

To train its own models, Microsoft had to develop its own suite of techniques and optimization tools, many of which are now available in the DeepSpeed PyTorch library and ONNX Runtime. These allow people to train very large AI models across many computing clusters and also to squeeze more computing power from the hardware.

That requires partitioning a large AI model into its many layers and distributing those layers across different machines, a process called model parallelism. In a process called data parallelism, Microsoft’s optimization tools also split the huge amount of training data into batches that are used to train multiple instances of the model across the cluster, which are then periodically averaged to produce a single model.
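As a rough, single-process illustration of the data-parallel averaging described above (not Microsoft’s actual tooling), the sketch below trains several replicas of the same small model on separate batches and periodically averages their parameters back into one global model.

```python
# Single-process illustration of the data-parallel idea (not Microsoft's tooling):
# several replicas of one model train on their own batches, and their parameters are
# periodically averaged back into a single global model.
import copy
import torch
import torch.nn as nn

global_model = nn.Linear(8, 1)
replicas = [copy.deepcopy(global_model) for _ in range(4)]  # 4 simulated workers
loss_fn = nn.MSELoss()

for sync_round in range(5):
    for replica in replicas:
        opt = torch.optim.SGD(replica.parameters(), lr=0.05)
        for _ in range(10):                      # each worker trains on its own shard
            x = torch.randn(16, 8)
            y = x.sum(dim=1, keepdim=True)       # synthetic regression target
            loss = loss_fn(replica(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    # Periodic averaging: fold the replicas back into one set of weights.
    with torch.no_grad():
        for name, param in global_model.named_parameters():
            stacked = torch.stack([dict(r.named_parameters())[name] for r in replicas])
            param.copy_(stacked.mean(dim=0))
        for replica in replicas:                 # workers restart from the averaged model
            replica.load_state_dict(global_model.state_dict())

print("averaged weights:", global_model.weight.flatten()[:4])
```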

The efficiencies that Microsoft researchers and engineers have achieved in this kind of distributed training will make using large-scale AI models much more resource-efficient and cost-effective for everyone, Microsoft says.

When you’re developing a cloud platform for general use, Scott said, it’s critical to have projects like the OpenAI supercomputing partnership and AI at Scale initiative pushing the cutting edge of performance.

He compares it to the automotive industry developing high-tech innovations for Formula 1 race cars that eventually find their way into the sedans and sport utility vehicles that people drive every day.

“By developing this leading-edge infrastructure for training large AI models, we’re making all of Azure better,” Scott said. “We’re building better computers, better-distributed systems, better networks, better datacenters. All of this makes the performance and cost and flexibility of the entire Azure cloud better.”

Source: https://blogs.microsoft.com/ai/openai-azure-supercomputer/

Best Practices for Effective Video Conferencing
17 May

To make your video conferencing meetings more productive and rewarding for everyone, review the general video conferencing best practices, and learn how to improve the experience whether you are an onsite participant or a remote participant.

Video conferencing best practices

Follow these tips to ensure a more successful video conferencing meeting.

Prior to a meeting:

  • When using equipment or locations not regularly used, test your meeting connections in advance.
  • When possible, establish online video conferencing connections several minutes before the meeting start time.
  • Create a backup communication plan in case you have trouble connecting with remote participants. A backup plan can include asking onsite participants to connect to the meeting through their laptops, using a mobile or speakerphone, and/or collaborating through an online collaboration tool (e.g., Google docs).

During a meeting:

  • Have all participants share their video and audio. No lurkers.
    • Ensure all participants can see and hear all other participants, as appropriate.
    • Ensure conference room microphones are distributed appropriately to pick up all speakers.
    • Ensure location lighting does not limit a participant’s visibility (e.g., avoid backlighting from windows or lamps).
  • Have participants mute their microphones if their location has excessive background noise or they will not be speaking.
  • Have a meeting facilitator — often, but not always, the person who called the meeting.

The facilitator is responsible for:

  • Providing an agenda to participants — ahead of the meeting is nice, but minimally at the start of the meeting — that includes an overview of topics to be covered and planned outcome;
  • Establishing the visual or verbal cues, such as raising a hand, to indicate when someone wants to actively contribute verbally to the meeting;
  • Engaging participants at all locations to ensure discussion, understanding, and alignment;
  • Limiting “side conversations” and multitasking, or ensuring all participants are made aware of that content;
  • Making sure all participants have equal access to content by sharing all content within the video conferencing connection and using online tools (e.g., Google docs) whenever possible.

Tips to improve a video conferencing meeting if you are onsite

Follow these steps to connect an H.323 or SIP-based room system to a video conferencing meeting.

After you connect with the video conferencing software, you will see a splash screen and be prompted to enter your meeting ID.

Enter the meeting ID that is listed on your meeting invitation email.

The video conferencing software then connects your room system to the meeting.

See the following Zoom video for tips on setting up a room for video conferencing.

Tips to improve a video conferencing meeting if you are remote

If you participate remotely in a video conference, follow these instructions to ensure the best experience.

  1. Try to connect via a wired Ethernet cable. This prevents WiFi dropouts and speed issues.
  2. If connecting from a laptop, plug the laptop into wall power. Running on battery can adversely affect video quality.
  3. Test the connection before the call; this is strongly recommended.
    • If you use Zoom: Go to the Zoom site to test your audio connection or test your video connection.
    • If you use WebEx: Go to your WebEx Personal Room. Test your audio connection using the Audio pull-down menu. Test your video connection by viewing the screen in your Personal Room.
  4. Ensure that you have a camera, microphone, and headphones or speakers available. Earbuds or headphones are preferable to avoid audio feedback and echo. Most modern laptops and all-in-one desktops have a headphone jack, microphone, and speakers built-in.
  5. Be aware of your surroundings and how you appear visually.
    • Call from a quiet location with no background noise.
    • Close blinds on windows so that you are easier to see on the video.
    • Wear neutral, solid-colored clothing. Avoid black, white, or striped clothing.
  6. Be aware of your behavior. Because you are in a video conference, people can see what you are doing at all times.
  7. Follow all instructions in the video conferencing invitation and note important supplemental information, such as a backup phone number in case you are disconnected.

Source: https://uit.stanford.edu/videoconferencing/best-practices

Optimize Costs and Maximize Control with Private Cloud Computing
08 May

A private cloud gives you always-on availability and scalability with a long-term cost advantage.

If you have strict requirements for data privacy or resource management or want to optimize costs over the long term, a private cloud hosted either in your data center or by a third-party provider is a smart option.

Business Advantages of Private Cloud Computing

  • A private cloud can be hosted on infrastructure in your data center or by a third-party provider as a managed private cloud. Both options deliver services to users via the internet.
  • With a private cloud, you get more control over data and resources, support for custom applications that can’t be migrated to the public cloud, and a lower cost over the long term.

When choosing your best cloud deployment model, you’ll need to take into account your unique business needs—including desired CapEx and OpEx, the types of workloads you’ll be running, and your available IT resources.

Many organizations will need some amount of private cloud services. A private cloud is commonly hosted in your data center and maintained by your IT team, with services delivered to your users via the internet. It can also be hosted off-premises by a third-party provider as a managed private cloud.

Benefits of the private cloud

A private cloud gives you more control over how you use computing, storage, and networking. These always-on resources provide on-demand data availability, ensuring reliability and support for mission-critical workloads. You also get more control over security and privacy for data governance. This way, you can ensure compliance with any regulations, such as the European Union’s General Data Protection Regulation (GDPR).

Furthermore, a private cloud allows you to support internally developed applications, protect intellectual property, and support legacy applications that were not built for the public cloud.

It’s also the best path for optimizing your computing costs. Over the long term, running certain workloads on a private cloud can deliver a lower TCO as you deliver more computing power with less physical hardware. However, setting up and maintaining a private cloud on-premises requires a higher cost upfront as you purchase IT infrastructure.

Because private clouds give you both scalability and elasticity, you can respond quickly to changing workload demands. Your IT team can set up a self-service portal and spin up a virtual machine in minutes. They can also enable a single-tenant environment in which software can be customized to meet your organization’s needs.

Private cloud use cases

There are certain scenarios in which private infrastructure is best for hosting cloud services. While these use cases are most common among government, defense, scientific, and engineering organizations, they can also occur in any business, depending on the specific needs. In short, a private cloud is ideal for any use case in which you must do the following:

  • Protect sensitive information, including intellectual property
  • Meet data sovereignty or compliance requirements
  • Ensure high availability, as with mission-critical applications
  • Support internally developed or legacy applications

In some cases, you may want to set up a virtual private cloud, an on-demand pool of computing resources that provides isolation for approved users. This gives you an extra layer of control for privacy and security purposes.

A private cloud gives you more control over your data and resources, support for proprietary or legacy applications, and a better TCO over the long term.

Need help determining the best cloud solution for your business?

Call us today at 855-225-4535 for a free consultation or click here

Source: https://www.intel.com/content/www/us/en/cloud-computing/what-is-private-cloud.html

Learn More – Cloud computing: A complete guide

Cloud computing: A complete guide
08 May

Cloud computing is no longer something new — 94% of companies use it in some form. Cloud computing is today’s standard for competing effectively and speeding up your digital transformation.

What is cloud computing?

Cloud computing, sometimes referred to simply as “cloud,” is the use of computing resources — servers, database management, data storage, networking, software applications, and special capabilities such as blockchain and artificial intelligence (AI) — over the internet, as opposed to owning and operating those resources yourself, on premises.

Compared to traditional IT, cloud computing offers organizations a host of benefits: the cost-effectiveness of paying for only the resources you use; faster time to market for mission-critical applications and services; the ability to scale easily, affordably and — with the right cloud provider — globally; and much more (see “What are the benefits of cloud computing?” below). And many organizations are seeing additional benefits from combining public cloud services purchased from a cloud services provider with private cloud infrastructure they operate themselves to deliver sensitive applications or data to customers, partners and employees.

Increasingly, “cloud computing” is becoming synonymous with “computing.” For example, in a 2019 survey of nearly 800 companies, 94% were using some form of cloud computing. Many businesses are still in the first stages of their cloud journey, having migrated or deployed about 20% of their applications to the cloud, and are working out the unique security, compliance and geographic implications of moving their remaining mission-critical applications. But move they will: Industry analyst Gartner predicts that more than half of companies using cloud today will move to an all-cloud infrastructure by next year (2021).

A brief history of cloud computing

Cloud computing dates back to the 1950s, and over the years, it has evolved through many phases that were first pioneered by IBM, including grid, utility, and on-demand computing.

What are the benefits of cloud computing?

Compared to traditional IT, cloud computing typically enables:

  • Greater cost-efficiency. While traditional IT requires you to purchase computing capacity in anticipation of growth or surges in traffic — a capacity that sits unused until you grow or traffic surges — cloud computing enables you to pay for only the capacity you need when you need it. Cloud also eliminates the ongoing expense of purchasing, housing, maintaining, and managing infrastructure on-premises.
  • Improved agility; faster time to market. On the cloud you can provision and deploy  (“spin up”)  a server in minutes; purchasing and deploying the same server on-premises might take weeks or months.
  • Greater scalability and elasticity. Cloud computing lets you scale workloads automatically — up or down — in response to business growth or surges in traffic. And working with a cloud provider that has data centers spread around the world enables you to scale up or down globally on demand, without sacrificing performance.
  • Improved reliability and business continuity. Because most cloud providers have redundancy built into their global networks, data backup and disaster recovery are typically much easier and less expensive to implement effectively in the cloud than on-premises. Providers who offer packaged disaster recovery solutions — referred to as disaster recovery as a service, or DRaaS — make the process even easier, more affordable, and less disruptive.
  • Continually improving performance. The leading cloud service providers regularly update their infrastructure with the latest, highest-performing computing, storage, and networking hardware.
  • Better security, built-in. Traditionally, security concerns have been the leading obstacle for organizations considering cloud adoption. But in response to demand, the security offered by cloud service providers is steadily outstripping on-premises solutions. According to security software provider McAfee, today 52% of companies experience better security in the cloud than on-premises. Gartner has predicted that by this year (2020), infrastructure as a service (IaaS) cloud workloads will experience 60% fewer security incidents than those in traditional data centers.

With the right provider, the cloud also offers the added benefit of greater choice and flexibility. Specifically, a cloud provider that supports open standards and a hybrid multi-cloud implementation (see “Multicloud and Hybrid Multicloud” below) gives you the choice and flexibility to combine cloud and on-premises resources from unlimited vendors into a single, optimized, seamlessly integrated infrastructure you can manage from a single point of control — an infrastructure in which each workload runs in the best possible location based on its specific performance, security, regulatory compliance, and cost requirements.

Cloud computing storage

Storage growth continues at a significant rate, driven by new workloads like analytics, video, and mobile applications. While storage demand is increasing, most IT organizations are under continued pressure to lower the cost of their IT infrastructure through the use of shared cloud computing resources. It’s vital for software designers and solution architects to match the specific requirements of their workloads to the appropriate storage solution or, in many enterprise cases, a mix.

One of the biggest advantages of cloud storage is flexibility. A company that has your data or data you want will be able to manage, analyze, add to and transfer it all from a single dashboard — something impossible to do today on storage hardware that sits alone in a data center.

The other major benefit of storage software is that it can access and analyze any kind of data wherever it lives, no matter the hardware, platform, or format. So, from mobile devices linked to your bank to servers full of unstructured social media information, data can be understood via the cloud.

Learn more about cloud storage

The future of cloud

Within the next three years, 75 percent of existing non-cloud apps will move to the cloud. Today’s computing landscape shows companies not only adopting cloud but using more than one cloud environment. Even then, the cloud journey for many has only just begun, moving beyond low-end infrastructure as a service to establish higher business value.

Source: https://www.ibm.com/cloud/learn/cloud-computing

Learn More – Optimize Costs and Maximize Control with Private Cloud Computing

Identifying and Avoiding COVID-19 Scams
22 Apr

Are you working from home or attending school online during the Coronavirus (COVID-19) pandemic? Be cautious of cybercriminals.

During this time of social distancing, people spend more time on their phones and computers for home, work, shopping, and entertainment. Cybercriminals take advantage of widespread fear, panic, and worry. They may use your extra screen time and time at home as an opportunity.

Protect yourself by being aware of different types of scams.

According to the U.S. Department of Justice, the Federal Trade Commission (FTC) and the Federal Communications Commission (FCC), there are several ways scammers will use COVID-19 to target people.

  • Vaccine and treatment scams. Scammers may advertise fake cures, vaccines, and advice on unproven treatments for COVID-19.
  • Shopping Scams. Scammers may create fake stores, e-commerce websites, social media accounts, and email addresses claiming to sell medical supplies currently in high demand. Supplies might include things like hand sanitizer, toilet paper, and surgical masks. Scammers will keep your money but never provide you with the merchandise.
  • Medical scams. Scammers may call and email people pretending to be doctors and hospitals that have treated a friend or relative for COVID-19 and demand payment for treatment.
  • Charity scams. Scammers sometimes ask for donations for people and groups affected by COVID-19.
  • Phishing and Malware scams. During the COVID-19 crisis, phishing and malware scams may be used to gain access to your computer or to steal your credentials.
    • Malware is malicious software such as spyware, ransomware, or viruses that can gain access to your computer system without you knowing. Malware can be activated when you click on email attachments or install risky software.
    • When Phishing is used, bad actors send false communications from what appears to be a trustworthy source to try to convince you to share sensitive data such as passwords or credit card information.
    • For example, scammers may pose as national and global health authorities, including the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) and send phishing emails designed to trick you into downloading malware or providing your personal and financial information.
  • App scams. Scammers may create mobile apps designed to track the spread of COVID-19 and insert malware into that app, which will compromise users’ devices and personal information.
  • Investment scams. Scammers may offer online promotions on things like social media, claiming that products or services of publicly traded companies can prevent, detect, or cure COVID-19, causing the stock of these companies to dramatically increase in value as a result.

(Source: U.S. Department of Justice)  

Malicious Domains and Files Related to Zoom Increase, ‘Zoom Bombing’ on the Rise
05 Apr

Threat actors are taking advantage of the increased usage of video conferencing apps, as reflected in the rise of malicious domains and files related to the Zoom application. Cases of “Zoom bombing” have been witnessed as well. The use of Zoom and other video conferencing platforms has increased since many companies have transitioned to a work-from-home setup due to the coronavirus (COVID-19) outbreak.

Registrations of domains that reference the name of Zoom have significantly increased, according to Check Point Research. More than 1,700 new domains related to Zoom have been registered since the beginning of 2020, and 25% of them were registered in the past week alone. Of these domains, 4% have been found to have suspicious characteristics.

Other communication apps such as Google Classroom have been targeted as well; the official domain classroom.google.com has already been spoofed as googloclassroom[.]com and googieclassroom[.]com.
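As a purely illustrative example of how such lookalike domains can be flagged (this is not from Check Point’s research), the short script below compares candidate domains against a small allow-list using Python’s standard library; real detection systems rely on far richer signals such as registration age, certificates and hosting history.

```python
# Purely illustrative (not from the report): flag lookalike domains by comparing them
# against a small allow-list with Python's standard library.
from difflib import SequenceMatcher

LEGITIMATE = ["zoom.us", "classroom.google.com", "teams.microsoft.com"]

def lookalike_score(domain: str) -> tuple[str, float]:
    """Return the closest legitimate domain and a 0..1 similarity score."""
    best = max(LEGITIMATE, key=lambda legit: SequenceMatcher(None, domain, legit).ratio())
    return best, SequenceMatcher(None, domain, best).ratio()

for candidate in ["googloclassroom.com", "googieclassroom.com", "classroom.google.com"]:
    closest, score = lookalike_score(candidate)
    # High similarity to a known domain without being an exact match is a red flag.
    suspicious = candidate != closest and score > 0.6
    print(f"{candidate}: closest={closest}, similarity={score:.2f}, suspicious={suspicious}")
```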

The researchers were also able to detect malicious files containing the word “Zoom,” such as “zoom-us-zoom_##########.exe” (# representing various digits). A file related to the Microsoft Teams platform (“Microsoft-teams_V#mu#D_##########.exe”) was found as well. Running these files installs the InstallCore PUA (potentially unwanted application) on the user’s computer, which could allow other parties to install malware.

In addition to malicious domains and files, the public is also warned of Zoom bombing, or strangers crashing private video conference calls to perform disruptive acts such as sharing obscene images and videos or using profane language. Attackers guess random meeting ID numbers in an attempt to join these calls. Companies and schools holding online classes have fallen victim to this. Zoom has released recommendations on how to prevent uninvited participants from joining in on private calls.

Zooming in on work-from-home setup security

The transition of many companies to a work-from-home (WFH) arrangement has brought about its own set of security concerns. For one, the increased reliance of companies on video conferencing apps for communication can inadvertently expose businesses to threats and even possibly leak classified company information.

Employees are advised to properly configure the settings of these apps to ensure that only those invited can participate in the call. Users are also advised to double-check domains that may look related to video conferencing apps and verify the source before downloading files. Official domains and related downloads are usually listed on the apps’ official websites.

Besides securing the use of video conferencing apps, users can also protect their WFH setups through the proper use and configuration of a virtual private network (VPN) and remote desktop protocol (RDP), which are commonly used for remote connection. Choosing strong passwords and setting up two-factor authentication (2FA) will also help secure accounts. Users are also reminded to be wary of online scams, including those that use content related to COVID-19 to lure possible victims.

Source: https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/malicious-domains-and-files-related-to-zoom-increase-zoom-bombing-on-the-rise?_ga=2.129671180.1627239902.1586142226-889185152.1585619978