Google’s Core Web Vitals to Become Ranking Signals
02 Jun

Google announces an upcoming change to search rankings that will incorporate Core Web Vitals as a ranking signal.

Search signals for page experience

“The page experience signal measures aspects of how users perceive the experience of interacting with a web page. Optimizing for these factors makes the web more delightful for users across all web browsers and surfaces, and helps sites evolve towards user expectations on mobile.”

Google is introducing a new ranking signal, which combines Core Web Vitals with existing user experience signals, to improve the way it evaluates the overall experience provided by a page.

This new ranking signal is in the early stages of development and is not scheduled to launch until at least next year.

To help site owners prepare, Google has provided an early look at the work being done so far.

The New ‘Page Experience’ Signal

The upcoming ranking signal will be known as the page experience signal.

The page experience signal consists of the Core Web Vitals, as well as these existing page experience metrics:

  • Mobile-friendliness
  • Safe-browsing
  • HTTPS-security
  • Intrusive interstitial guidelines

Core Web Vitals

Core Web Vitals, introduced earlier this month, are a set of metrics related to speed, responsiveness, and visual stability.

Google has defined these as the Core Web Vitals:

  • Largest Contentful Paint: The time it takes for a page’s main content to load. An ideal LCP measurement is 2.5 seconds or faster.
  • First Input Delay: The time it takes for a page to become interactive. An ideal measurement is less than 100 milliseconds.
  • Cumulative Layout Shift: The amount of unexpected layout shift of visual page content. An ideal measurement is less than 0.1.
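To make those thresholds concrete, here is a minimal Python sketch that checks raw measurements against the ideal values listed above. The function and dictionary names are illustrative, not part of any Google tool:

```python
# Illustrative sketch: check measured Core Web Vitals against the "ideal"
# thresholds given above. Names and structure are hypothetical.

THRESHOLDS = {
    "LCP": 2.5,  # seconds: main content should load within 2.5 s
    "FID": 0.1,  # seconds: input delay should stay under 100 ms
    "CLS": 0.1,  # unitless layout-shift score: should stay under 0.1
}

def meets_ideal(metric: str, value: float) -> bool:
    """Return True if the measured value meets the ideal threshold."""
    return value <= THRESHOLDS[metric]

def summarize(measurements: dict) -> dict:
    """Map each measured metric to a simple pass/fail verdict."""
    return {
        metric: ("good" if meets_ideal(metric, value) else "needs improvement")
        for metric, value in measurements.items()
    }
```

For example, `summarize({"LCP": 2.1, "FID": 0.05, "CLS": 0.25})` would flag only CLS as needing improvement.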

This set of metrics was designed to help site owners measure the user experience they’re providing when it comes to loading, interactivity, and visual stability.

Core Web Vitals are not set in stone – which means they may change from year to year depending on what users expect out of a good web page experience.

For now, the Core Web Vitals are those listed above. Google will certainly update the public if and when these metrics change.

For more details about Core Web Vitals, see our full report from when they were first introduced.

Page Experience Signal & Ranking

By adding Core Web Vitals as ranking factors, and combining them with other user experience signals, Google aims to help more site owners build pages that users enjoy visiting.

If Google determines that a page is providing a high-quality user experience, based on its page experience signal, then it will likely rank the page higher in search results.

However, content relevance is still considered important when it comes to rankings. A page with content that’s highly relevant to a query could conceivably rank well even if it had a poor page experience signal.

The opposite is also true, as Google states:

“A good page experience doesn’t override having great, relevant content. However, in cases where there are multiple pages that have similar content, page experience becomes much more important for visibility in Search.”

As Google mentions, the page experience signal acts as a tie-breaker of sorts: if two pages both provide excellent content, the one with the stronger page experience signal will rank higher in search results.

So don’t get so hung up on optimizing for page experience that the actual content on the page starts to suffer. Great content can, in theory, outrank a great page experience.

Evaluating Page Experience

As of yet, there is no single tool for evaluating page experience as a whole.

However, it is possible to measure the individual components that make up the page experience signal.

Measuring Core Web Vitals

When it comes to measuring Core Web Vitals, SEOs and site owners can use a variety of Google’s own tools such as:

  • Search Console
  • PageSpeed Insights
  • Lighthouse
  • Chrome DevTools
  • Chrome UX report
  • And more

Soon, a plugin for the Chrome browser will also be available to quickly evaluate the Core Web Vitals of any page you’re looking at. Google is also working with third parties to bring Core Web Vitals to other tools.

Measuring other user experience signals

Here’s how SEOs and site owners can measure the other type of user experience signals:

  • Mobile-friendliness: Use Google’s mobile-friendly test.
  • Safe-browsing: Check the Security Issues report in Search Console for any issues with safe browsing.
  • HTTPS: If a page is served over a secure HTTPS connection then it will display a lock icon in the browser address bar.
  • Intrusive interstitial guidelines: This one is a bit trickier. Refer to Google’s guidelines on what counts as an intrusive interstitial.

When Will These Changes Happen?

There is no need to take immediate action, Google says, as these changes will not happen before next year.

Google will provide at least 6 months’ notice before they are rolled out.

The company is simply giving site owners a heads up in an effort to keep people informed about ranking changes as early as possible.

Source: https://www.searchenginejournal.com/googles-core-web-vitals-ranking-signal/370719/

Source: https://webmasters.googleblog.com/2020/05/evaluating-page-experience.html

Microsoft announces new supercomputer, lays out vision for future AI work
19 May

Microsoft has built one of the top five publicly disclosed supercomputers in the world, making new infrastructure available in Azure to train extremely large artificial intelligence models, the company is announcing at its Build developers conference.

Built in collaboration with and exclusively for OpenAI, the supercomputer hosted in Azure was designed specifically to train that company’s AI models. It represents a key milestone in a partnership announced last year to jointly create new supercomputing technologies in Azure.

It’s also a first step toward making the next generation of very large AI models and the infrastructure needed to train them available as a platform for other organizations and developers to build upon.

“The exciting thing about these models is the breadth of things they’re going to enable,” said Microsoft Chief Technical Officer Kevin Scott, who said the potential benefits extend far beyond narrow advances in one type of AI model.

“This is about being able to do a hundred exciting things in natural language processing at once and a hundred exciting things in computer vision, and when you start to see combinations of these perceptual domains, you’re going to have new applications that are hard to even imagine right now,” he said.

A new class of multitasking AI models

Machine learning experts have historically built separate, smaller AI models that use many labeled examples to learn a single task such as translating between languages, recognizing objects, reading text to identify key points in an email, or recognizing speech well enough to deliver today’s weather report when asked.

A new class of models developed by the AI research community has proven that some of those tasks can be performed better by a single massive model — one that learns from examining billions of pages of publicly available text, for example. This type of model can so deeply absorb the nuances of language, grammar, knowledge, concepts, and context that it can excel at multiple tasks: summarizing a lengthy speech, moderating content in live gaming chats, finding relevant passages across thousands of legal files or even generating code from scouring GitHub.

As part of a companywide AI at Scale initiative, Microsoft has developed its own family of large AI models, the Microsoft Turing models, which it has used to improve many different language understanding tasks across Bing, Office, Dynamics, and other productivity products.  Earlier this year, it also released to researchers the largest publicly available AI language model in the world, the Microsoft Turing model for natural language generation.

The goal, Microsoft says, is to make its large AI models, training optimization tools, and supercomputing resources available through Azure AI services and GitHub so developers, data scientists, and business customers can easily leverage the power of AI at Scale.

“By now most people intuitively understand how personal computers are a platform — you buy one and it’s not like everything the computer is ever going to do is built into the device when you pull it out of the box,” Scott said.

“That’s exactly what we mean when we say AI is becoming a platform,” he said. “This is about taking a very broad set of data and training a model that learns to do a general set of things and making that model available for millions of developers to go figure out how to do interesting and creative things with.”

Training massive AI models requires advanced supercomputing infrastructure, or clusters of state-of-the-art hardware connected by high-bandwidth networks. It also requires tools to train the models across these interconnected computers.

The supercomputer developed for OpenAI is a single system with more than 285,000 CPU cores, 10,000 GPUs, and 400 gigabits per second of network connectivity for each GPU server. Compared with other machines listed on the TOP500 supercomputers in the world, it ranks in the top five, Microsoft says. Hosted in Azure, the supercomputer also benefits from all the capabilities of robust modern cloud infrastructure, including rapid deployment, sustainable data centers, and access to Azure services.

“As we’ve learned more and more about what we need and the different limits of all the components that make up a supercomputer, we were really able to say, ‘If we could design our dream system, what would it look like?’” said OpenAI CEO Sam Altman. “And then Microsoft was able to build it.”

OpenAI’s goal is not just to pursue research breakthroughs but also to engineer and develop powerful AI technologies that other people can use, Altman said. The supercomputer developed in partnership with Microsoft was designed to accelerate that cycle.

“We are seeing that larger-scale systems are an important component in training more powerful models,” Altman said.

For customers who want to push their AI ambitions but who don’t require a dedicated supercomputer, Azure AI provides access to powerful computing with the same set of AI accelerators and networks that also power the supercomputer. Microsoft is also making available the tools to train large AI models on these clusters in a distributed and optimized way.

At its Build conference, Microsoft announced that it would soon begin open-sourcing its Microsoft Turing models, as well as recipes for training them in Azure Machine Learning. This will give developers access to the same family of powerful language models that the company has used to improve language understanding across its products.

It also unveiled a new version of DeepSpeed, an open-source deep-learning library for PyTorch that reduces the amount of computing power needed for large distributed model training. The update is significantly more efficient than the version released just three months ago and now allows people to train models more than 15 times larger and 10 times faster than they could without DeepSpeed on the same infrastructure.

Along with the DeepSpeed announcement, Microsoft announced it has added support for distributed training to the ONNX Runtime. The ONNX Runtime is an open-source library designed to enable models to be portable across hardware and operating systems. To date, the ONNX Runtime has focused on high-performance inferencing; today’s update adds support for model training, as well as adding the optimizations from the DeepSpeed library, which enable performance improvements of up to 17 times over the current ONNX Runtime.

“We want to be able to build these very advanced AI technologies that ultimately can be easily used by people to help them get their work done and accomplish their goals more quickly,” said Microsoft principal program manager Phil Waymouth. “These large models are going to be an enormous accelerant.”

Learning the nuances of language

Designing AI models that might one day understand the world more like people do starts with language, a critical component to understanding human intent, making sense of the vast amount of written knowledge in the world and communicating more effortlessly.

Neural network models that can process language, which are roughly inspired by our understanding of the human brain, aren’t new. But these deep learning models are now far more sophisticated than earlier versions and are rapidly escalating in size.

A year ago, the largest models had 1 billion parameters, each loosely equivalent to a synaptic connection in the brain. The Microsoft Turing model for natural language generation now stands as the world’s largest publicly available language AI model with 17 billion parameters.

This new class of models learns differently than supervised learning models that rely on meticulously labeled human-generated data to teach an AI system to recognize a cat or determine whether the answer to a question makes sense.

In what’s known as “self-supervised” learning, these AI models can learn about language by examining billions of pages of publicly available documents on the internet — Wikipedia entries, self-published books, instruction manuals, history lessons, human resources guidelines. In something like a giant game of Mad Libs, words or sentences are removed, and the model has to predict the missing pieces based on the words around it.
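The Mad Libs analogy can be sketched in a few lines of Python. This is a deliberately tiny illustration built on word co-occurrence counts, whereas real models use large neural networks trained on billions of pages:

```python
from collections import Counter, defaultdict

# Toy illustration of self-supervised "Mad Libs" learning: hide a word,
# then predict it from its neighbors using simple co-occurrence counts.

corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": for every position, record which word filled the blank
# between its left and right neighbors.
context_counts = defaultdict(Counter)
for i in range(1, len(corpus) - 1):
    context = (corpus[i - 1], corpus[i + 1])  # the words around the blank
    context_counts[context][corpus[i]] += 1   # the word that filled it

def predict_missing(before: str, after: str) -> str:
    """Guess the hidden word from its left and right neighbors."""
    return context_counts[(before, after)].most_common(1)[0][0]
```

Here `predict_missing("sat", "the")` recovers "on", because that word most often fills the blank in "sat ___ the". Done billions of times over real text, this kind of fill-in-the-blank objective is what teaches large models about grammar and context without any human labeling.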

As the model does this billions of times, it gets very good at perceiving how words relate to each other. This results in a rich understanding of grammar, concepts, contextual relationships and other building blocks of language. It also allows the same model to transfer lessons learned across many different language tasks, from document understanding to answering questions to creating conversational bots.

“This has enabled things that were seemingly impossible with smaller models,” said Luis Vargas, a Microsoft partner technical advisor who is spearheading the company’s AI at Scale initiative.

The improvements are somewhat like jumping from an elementary reading level to a more sophisticated and nuanced understanding of language. But it’s possible to improve accuracy even further by fine-tuning these large AI models on a more specific language task or exposing them to material that’s specific to a particular industry or company.

“Because every organization is going to have its own vocabulary, people can now easily fine-tune that model to give it a graduate degree in understanding business, healthcare or legal domains,” he said.

AI at Scale

One advantage of the next generation of large AI models is that they only need to be trained once with massive amounts of data and supercomputing resources. A company can take a “pre-trained” model and simply fine-tune it for different tasks with much smaller datasets and resources.

The Microsoft Turing model for natural language understanding, for instance, has been used across the company to improve a wide range of product offerings over the last year. It has significantly advanced caption generation and question answering in Bing, improving answers to search questions in some markets by up to 125 percent.

In Office, the same model has fueled advances in the smart find feature that enables easier searches in Word, the Key Insights feature that extracts important sentences to quickly locate key points in Word, and Outlook’s Suggested replies feature that automatically generates possible responses to an email. Dynamics 365 Sales Insights also uses it to suggest actions to a seller based on interactions with customers.

Microsoft is also exploring large-scale AI models that can learn in a generalized way across text, images, and video. That could help with automatic captioning of images for accessibility in Office, for instance, or improve the ways people search Bing by understanding what’s inside images and videos.

To train its own models, Microsoft had to develop its own suite of techniques and optimization tools, many of which are now available in the DeepSpeed PyTorch library and ONNX Runtime. These allow people to train very large AI models across many computing clusters and also to squeeze more computing power from the hardware.

That requires partitioning a large AI model into its many layers and distributing those layers across different machines, a process called model parallelism. In a process called data parallelism, Microsoft’s optimization tools also split the huge amount of training data into batches that are used to train multiple instances of the model across the cluster, which are then periodically averaged to produce a single model.
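The data-parallelism step described above, periodically combining multiple trained instances into a single model, can be sketched as a simple parameter average. This is a simplified illustration; frameworks such as DeepSpeed do this with gradient averaging via GPU collective operations:

```python
# Simplified illustration of data parallelism: several replicas of the
# same model are trained on different batches of data, and their
# parameters are periodically averaged back into a single model.

def average_replicas(replicas):
    """Average parameter values across model replicas, key by key."""
    n = len(replicas)
    return {
        name: sum(replica[name] for replica in replicas) / n
        for name in replicas[0]
    }
```

For instance, averaging two replicas `{"w": 1.0, "b": 0.0}` and `{"w": 3.0, "b": 2.0}` yields the single model `{"w": 2.0, "b": 1.0}`.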

The efficiencies that Microsoft researchers and engineers have achieved in this kind of distributed training will make using large-scale AI models much more resource-efficient and cost-effective for everyone, Microsoft says.

When you’re developing a cloud platform for general use, Scott said, it’s critical to have projects like the OpenAI supercomputing partnership and AI at Scale initiative pushing the cutting edge of performance.

He compares it to the automotive industry developing high-tech innovations for Formula 1 race cars that eventually find their way into the sedans and sport utility vehicles that people drive every day.

“By developing this leading-edge infrastructure for training large AI models, we’re making all of Azure better,” Scott said. “We’re building better computers, better distributed systems, better networks, better datacenters. All of this makes the performance and cost and flexibility of the entire Azure cloud better.”

Source: https://blogs.microsoft.com/ai/openai-azure-supercomputer/

Best Practices for Effective Video Conferencing
17 May

To make your video conferencing meetings more productive and rewarding for everyone, review the general video conferencing best practices, and learn how to improve the experience whether you are an onsite participant or a remote participant.

Video conferencing best practices

Follow these tips to ensure a more successful video conferencing meeting.

Prior to a meeting:

  • When using equipment or locations not regularly used, test your meeting connections in advance.
  • When possible, establish online video conferencing connections several minutes before the meeting start time.
  • Create a backup communication plan in case you have trouble connecting with remote participants. A backup plan can include asking onsite participants to connect to the meeting through their laptops, using a mobile phone or speakerphone, and/or collaborating through an online collaboration tool (e.g., Google Docs).

During a meeting:

  • Have all participants share their video and audio. No lurkers.
    • Ensure all participants can see and hear all other participants, as appropriate.
    • Ensure conference room microphones are distributed appropriately to pick up all speakers.
    • Ensure location lighting does not limit a participant’s visibility (e.g., avoid backlighting from windows or lamps).
  • Have participants mute their microphones if their location has excessive background noise or they will not be speaking.
  • Have a meeting facilitator — often, but not always, the person who called the meeting.

The facilitator is responsible for:

  • Providing an agenda to participants, ideally ahead of the meeting but minimally at the start of the meeting, that includes an overview of topics to be covered and the planned outcomes;
  • Establishing visual or verbal cues, such as raising a hand, to indicate when someone wants to actively contribute verbally to the meeting;
  • Engaging participants at all locations to ensure discussion, understanding, and alignment;
  • Limiting “side conversations” and multitasking, or ensuring all participants are made aware of that content;
  • Making sure all participants have equal access to content by sharing all content within the video conferencing connection and using online tools (e.g., Google Docs) whenever possible.

Tips to improve a video conferencing meeting if you are onsite

Follow these steps to connect an H.323 or SIP-based room system to a video conferencing meeting.

After you connect with the video conferencing software, you will see a splash screen and be prompted to enter your meeting ID.

Enter the meeting ID that is listed on your meeting invitation email.

The video conferencing software then connects your room system to the meeting.

See the following Zoom video for tips on setting up a room for video conferencing.

Tips to improve a video conferencing meeting if you are remote

If you participate remotely in a video conference, follow these instructions to ensure the best experience.

  1. Try to connect via a wired Ethernet cable. This prevents WiFi dropouts and speed issues.
  2. If connecting from a laptop, plug the laptop into wall power. Battery use can adversely affect video quality.
  3. Test the connection before the call; this is strongly recommended.
    • If you use Zoom: Go to the Zoom site to test your audio connection or test your video connection.
    • If you use WebEx: Go to your WebEx Personal Room. Test your audio connection using the Audio pull-down menu. Test your video connection by viewing the screen in your Personal Room.
  4. Ensure that you have a camera, microphone, and headphones or speakers available. Earbuds or headphones are preferable to avoid audio feedback and echo. Most modern laptops and all-in-one desktops have a headphone jack, microphone, and speakers built-in.
  5. Be aware of your surroundings and how you appear visually.
    • Call from a quiet location with no background noise.
    • Close blinds on windows so that you are easier to see on the video.
    • Wear neutral, solid-colored clothing. Avoid black, white, or striped clothing.
  6. Be aware of your behavior. Because you are in a video conference, people can see what you are doing at all times.
  7. Follow all instructions in the video conferencing invitation and note important supplemental information, such as a backup phone number in case you are disconnected.

Source: https://uit.stanford.edu/videoconferencing/best-practices

Optimize Costs and Maximize Control with Private Cloud Computing
08 May

A private cloud gives you always-on availability and scalability with a long-term cost advantage.

If you have strict requirements for data privacy or resource management or want to optimize costs over the long term, a private cloud hosted either in your data center or by a third-party provider is a smart option.

Business Advantages of Private Cloud Computing

  • A private cloud can be hosted on infrastructure in your data center or by a third-party provider as a managed private cloud. Both options deliver services to users via the internet.
  • With a private cloud, you get more control over data and resources, support for custom applications that can’t be migrated to the public cloud, and a lower cost over the long term.

When choosing your best cloud deployment model, you’ll need to take into account your unique business needs—including desired CapEx and OpEx, the types of workloads you’ll be running, and your available IT resources.

Many organizations will need some amount of private cloud services. A private cloud is commonly hosted in your data center and maintained by your IT team, with services delivered to your users via the internet. It can also be hosted off-premises by a third-party provider as a managed private cloud.

Benefits of the private cloud

A private cloud gives you more control over how you use computing, storage, and networking. These always-on resources provide on-demand data availability, ensuring reliability and support for mission-critical workloads. You also get more control over security and privacy for data governance. This way, you can ensure compliance with any regulations, such as the European Union’s General Data Protection Regulation (GDPR).

Furthermore, a private cloud allows you to support internally developed applications, protect intellectual property, and support legacy applications that were not built for the public cloud.

It’s also the best path for optimizing your computing costs. Over the long term, running certain workloads on a private cloud can deliver a lower TCO as you deliver more computing power with less physical hardware. However, setting up and maintaining a private cloud on-premises requires a higher cost upfront as you purchase IT infrastructure.

Because private clouds give you both scalability and elasticity, you can respond quickly to changing workload demands. Your IT team can set up a self-service portal and spin up a virtual machine in minutes. They can also enable a single-tenant environment in which software can be customized to meet your organization’s needs.

Private cloud use cases

There are certain scenarios in which private infrastructure is best for hosting cloud services. While these use cases are most common among government, defense, scientific, and engineering organizations, they can also occur in any business, depending on the specific needs. In short, a private cloud is ideal for any use case in which you must do the following:

  • Protect sensitive information, including intellectual property
  • Meet data sovereignty or compliance requirements
  • Ensure high availability, as with mission-critical applications
  • Support internally developed or legacy applications

In some cases, you may want to set up a virtual private cloud, an on-demand pool of computing resources that provides isolation for approved users. This gives you an extra layer of control for privacy and security purposes.

A private cloud gives you more control over your data and resources, support for proprietary or legacy applications, and a better TCO over the long term.

Need help determining the best cloud solution for your business?

Call us today at 855-225-4535 for a free consultation.

Source: https://www.intel.com/content/www/us/en/cloud-computing/what-is-private-cloud.html

Learn More – Cloud computing: A complete guide

Cloud computing: A complete guide
08 May

Cloud computing is no longer something new — 94% of companies use it in some form. Cloud computing is today’s standard for competing effectively and speeding up your digital transformation.

What is cloud computing?

Cloud computing, sometimes referred to simply as “cloud,” is the use of computing resources — servers, database management, data storage, networking, software applications, and special capabilities such as blockchain and artificial intelligence (AI) — over the internet, as opposed to owning and operating those resources yourself, on premises.

Compared to traditional IT, cloud computing offers organizations a host of benefits: the cost-effectiveness of paying for only the resources you use; faster time to market for mission-critical applications and services; the ability to scale easily, affordably and — with the right cloud provider — globally; and much more (see “What are the benefits of cloud computing?” below). And many organizations are seeing additional benefits from combining public cloud services purchased from a cloud services provider with private cloud infrastructure they operate themselves to deliver sensitive applications or data to customers, partners and employees.

Increasingly, “cloud computing” is becoming synonymous with “computing.” For example, in a 2019 survey of nearly 800 companies, 94% were using some form of cloud computing. Many businesses are still in the first stages of their cloud journey, having migrated or deployed about 20% of their applications to the cloud, and are working out the unique security, compliance and geographic implications of moving their remaining mission-critical applications. But move they will: Industry analyst Gartner predicts that more than half of companies using cloud today will move to an all-cloud infrastructure by next year (2021).

A brief history of cloud computing

Cloud computing dates back to the 1950s, and over the years, it has evolved through many phases that were first pioneered by IBM, including grid, utility, and on-demand computing.

What are the benefits of cloud computing?

Compared to traditional IT, cloud computing typically enables:

  • Greater cost-efficiency. While traditional IT requires you to purchase computing capacity in anticipation of growth or surges in traffic — a capacity that sits unused until you grow or traffic surges — cloud computing enables you to pay for only the capacity you need when you need it. Cloud also eliminates the ongoing expense of purchasing, housing, maintaining, and managing infrastructure on-premises.
  • Improved agility; faster time to market. On the cloud you can provision and deploy (“spin up”) a server in minutes; purchasing and deploying the same server on-premises might take weeks or months.
  • Greater scalability and elasticity. Cloud computing lets you scale workloads automatically — up or down — in response to business growth or surges in traffic. And working with a cloud provider that has data centers spread around the world enables you to scale up or down globally on demand, without sacrificing performance.
  • Improved reliability and business continuity. Because most cloud providers have redundancy built into their global networks, data backup and disaster recovery are typically much easier and less expensive to implement effectively in the cloud than on-premises. Providers who offer packaged disaster recovery solutions (referred to as disaster recovery as a service, or DRaaS) make the process even easier, more affordable, and less disruptive.
  • Continually improving performance. The leading cloud service providers regularly update their infrastructure with the latest, highest-performing computing, storage, and networking hardware.
  • Better security, built-in. Traditionally, security concerns have been the leading obstacle for organizations considering cloud adoption. But in response to demand, the security offered by cloud service providers is steadily outstripping on-premises solutions. According to security software provider McAfee, today 52% of companies experience better security in the cloud than on-premises. Gartner has predicted that by this year (2020), infrastructure as a service (IaaS) cloud workloads will experience 60% fewer security incidents than those in traditional data centers.

With the right provider, the cloud also offers the added benefit of greater choice and flexibility. Specifically, a cloud provider that supports open standards and a hybrid multi-cloud implementation (see “Multicloud and Hybrid Multicloud” below) gives you the choice and flexibility to combine cloud and on-premises resources from unlimited vendors into a single, optimized, seamlessly integrated infrastructure you can manage from a single point of control — and infrastructure in which each workload runs in the best possible location based on its specific performance, security, regulatory compliance, and cost requirements.

Cloud computing storage

Storage growth continues at a significant rate, driven by new workloads like analytics, video, and mobile applications. While storage demand is increasing, most IT organizations are under continued pressure to lower the cost of their IT infrastructure through the use of shared cloud computing resources. It’s vital for software designers and solution architects to match the specific requirements of their workloads to the appropriate storage solution or, in many enterprise cases, a mix.

One of the biggest advantages of cloud storage is flexibility. A company that holds your data, or data you want, can manage, analyze, add to, and transfer it all from a single dashboard, something impossible to do today with storage hardware that sits alone in a data center.

The other major benefit of storage software is that it can access and analyze any kind of data wherever it lives, no matter the hardware, platform, or format. So, from mobile devices linked to your bank to servers full of unstructured social media information, data can be understood via the cloud.

Learn more about cloud storage

The future of cloud

It is estimated that within the next three years, 75 percent of existing non-cloud apps will move to the cloud. Today’s computing landscape shows companies not only adopting cloud but using more than one cloud environment. Even then, the cloud journey for many has only just begun, moving beyond low-end infrastructure as a service to establish higher business value.

Source: https://www.ibm.com/cloud/learn/cloud-computing

Learn More – Optimize Costs and Maximize Control with Private Cloud Computing

Identifying and Avoiding COVID-19 Scams
22 Apr

Identifying and Avoiding COVID-19 Scams

Are you working from home or attending school online during the Coronavirus (COVID-19) pandemic? Be cautious of cybercriminals.

During this time of social distancing, people spend more time on their phones and computers for home, work, shopping, and entertainment. Cybercriminals take advantage of widespread fear, panic, and worry. They may use your extra screen time and time at home as an opportunity.

Protect yourself by being aware of different types of scams.

According to the U.S. Department of Justice, the Federal Trade Commission (FTC) and the Federal Communications Commission (FCC), there are several ways scammers will use COVID-19 to target people.

  • Vaccine and treatment scams. Scammers may advertise fake cures, vaccines, and advice on unproven treatments for COVID-19.
  • Shopping Scams. Scammers may create fake stores, e-commerce websites, social media accounts, and email addresses claiming to sell medical supplies currently in high demand. Supplies might include things like hand sanitizer, toilet paper, and surgical masks. Scammers will keep your money but never provide you with the merchandise.
  • Medical scams. Scammers may call and email people pretending to be doctors and hospitals that have treated a friend or relative for COVID-19 and demand payment for treatment.
  • Charity scams. Scammers sometimes ask for donations for people and groups affected by COVID-19.
  • Phishing and Malware scams. During the COVID-19 crisis, phishing and malware scams may be used to gain access to your computer or to steal your credentials.
    • Malware is malicious software such as spyware, ransomware, or viruses that can gain access to your computer system without you knowing. Malware can be activated when you click on email attachments or install risky software.
    • When Phishing is used, bad actors send false communications from what appears to be a trustworthy source to try to convince you to share sensitive data such as passwords or credit card information.
    • For example, scammers may pose as national and global health authorities, including the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) and send phishing emails designed to trick you into downloading malware or providing your personal and financial information.
  • App scams. Scammers may create mobile apps designed to track the spread of COVID-19 and insert malware into them, compromising users’ devices and personal information.
  • Investment scams. Scammers may post online promotions, including on social media, claiming that the products or services of publicly traded companies can prevent, detect, or cure COVID-19, causing the stock of these companies to dramatically increase in value as a result.

(Source: U.S. Department of Justice)  

Malicious Domains and Files Related to Zoom Increase, ‘Zoom Bombing’ on the Rise
05 Apr

Malicious Domains and Files Related to Zoom Increase, ‘Zoom Bombing’ on the Rise

Threat actors are taking advantage of the increased usage of video conferencing apps, as reflected in the rise of malicious domains and files related to the Zoom application. Cases of “Zoom bombing” have been witnessed as well. The use of Zoom and other video conferencing platforms has increased since many companies transitioned to a work-from-home setup due to the coronavirus (COVID-19) outbreak.

Registrations of domains that reference the name Zoom have significantly increased, according to Check Point Research. More than 1,700 new Zoom-related domains have been registered since the beginning of 2020, with 25% of them registered in the past week alone. Of these domains, 4% have been found to have suspicious characteristics.

Other communication apps such as Google Classroom have been targeted as well; the official domain classroom.google.com has already been spoofed as googloclassroom\[.]com and googieclassroom\[.]com.
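Spoofed domains like these typically differ from the legitimate one by only a character or two, so a rough string-similarity check can flag many of them. Below is a minimal Python sketch; the known-good domain list and the 0.6 threshold are illustrative assumptions, not a vetted detection rule:

```python
from difflib import SequenceMatcher

# Illustrative list of known-good domains to compare against.
LEGITIMATE = ["zoom.us", "classroom.google.com", "teams.microsoft.com"]

def looks_spoofed(domain: str, threshold: float = 0.6) -> bool:
    """Flag a domain that closely resembles, but does not exactly match,
    a known-good domain. The threshold is a rough heuristic."""
    for good in LEGITIMATE:
        if domain == good:
            return False  # exact match: legitimate
        if SequenceMatcher(None, domain, good).ratio() >= threshold:
            return True   # near match: likely a lookalike
    return False

print(looks_spoofed("googloclassroom.com"))   # resembles classroom.google.com
print(looks_spoofed("classroom.google.com"))  # exact match, not flagged
```

A real defense would also check registration dates, certificates, and reputation feeds; this only illustrates the lookalike idea.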

The researchers were also able to detect malicious files containing the word “Zoom,” such as “zoom-us-zoom_##########.exe” (# representing various digits). A file related to Microsoft Teams platform (“Microsoft-teams_V#mu#D_##########.exe”) was found as well. Running these files installs InstallCore PUA on the user’s computer, which could allow other parties to install malware.
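Since the reported filenames follow recognizable patterns (only the digits vary between samples), a simple pattern check can be sketched. This is a hypothetical Python illustration modeled on the filenames quoted above, not an actual antivirus rule:

```python
import re

# Patterns modeled on the malicious filenames reported above;
# \d+ stands in for the varying digit runs shown as "#" in the report.
MALICIOUS_PATTERNS = [
    re.compile(r"^zoom-us-zoom_\d+\.exe$", re.I),
    re.compile(r"^microsoft-teams_V.+_\d+\.exe$", re.I),
]

def matches_known_pattern(filename: str) -> bool:
    """Return True if the filename matches a reported malicious pattern."""
    return any(p.match(filename) for p in MALICIOUS_PATTERNS)

print(matches_known_pattern("zoom-us-zoom_1234567890.exe"))  # True
```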

In addition to malicious domains and files, the public is also being warned about Zoom bombing, in which strangers crash private video conference calls to perform disruptive acts such as sharing obscene images and videos or using profane language. Attackers guess random meeting ID numbers in an attempt to join these calls. Companies and schools holding online classes have fallen victim to this. Zoom has released recommendations on how to prevent uninvited participants from joining private calls.

Zooming in on work-from-home set up security

The transition of many companies to a work-from-home (WFH) arrangement has brought its own set of security concerns. For one, companies’ increased reliance on video conferencing apps for communication can inadvertently expose businesses to threats and may even lead to leaks of confidential company information.

Employees are advised to properly configure the settings of these apps to ensure that only those invited can participate in the call. Users are also advised to double-check domains that may look related to video conferencing apps and verify the source before downloading files. Official domains and related downloads are usually listed in the apps’ official websites.

Besides securing the use of video conferencing apps, users can also protect their WFH setups through the proper use and configuration of a virtual private network (VPN) and remote desktop protocol (RDP), which are commonly used for remote connection. Choosing strong passwords and setting up two-factor authentication (2FA) will also help secure accounts. Users are also reminded to be wary of online scams, including those that use content related to COVID-19 to lure possible victims.

Source: https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/malicious-domains-and-files-related-to-zoom-increase-zoom-bombing-on-the-rise?_ga=2.129671180.1627239902.1586142226-889185152.1585619978

Working From Home? 5 Tips to Stay Secure
19 Mar

Working From Home? 5 Tips to Stay Secure

Working from home – a new reality

It’s evident that working from home has become a new reality for many, as more and more companies are encouraging and even requesting that their staff work remotely. In fact, recent events have accelerated this WFH trend, or workforce transformation process, with companies restricting employee travel and many allocating more resources to enable virtual work. Major tech players, like Twitter and LinkedIn, have made even bigger moves by implementing policies that require all employees to work from home. Clearly, work from home is no longer just an initiative to harness global talent but also a way to protect workers from risk.

Increased security risks

At McAfee, we’re keeping a close eye on this trend, observing huge increases in the number of personal devices connecting online. And while working from home offers benefits to employees, this upswing in personal devices connecting to enterprises can actually expose organizations and employees to security risks, such as malware attacks, identity theft, and ransomware. With the world now facing this new reality, the question remains: how can employers and employees equip themselves with the resources to work from home securely on a full-time or part-time basis?

Work from home securely

Employers must not only educate their employees on digital security best practices but also give them the tools to combat online threats that may stem from remote work. With many of us relying on emails and the web to work remotely, we need to be aware of the key giveaway signs that indicate a threat. From there, we can spot, flag, and report anything that looks suspicious. By sharing the responsibility and encouraging others to flag anything sketchy, we can all naturally raise awareness and help others avoid falling into similar traps. By staying open with one another, we can stay ahead of hackers.

Tips to protect both personal and corporate data

Want to ensure you work from home in a safe and secure way? Here are five quick tips and tools you can use to protect both personal and corporate data:

Utilize a VPN

Many people use public Wi-Fi at coffee shops, airports, etc. in order to stay connected both professionally and personally. However, by using an unsecured Wi-Fi connection, you may be creating an easy gateway for hackers to access your personal information and data. Be sure to use a virtual private network (VPN), which is extremely important for establishing a secured connection to work files and personal photos saved in the cloud.

Be aware of phishing emails

We’ve seen hackers attempt to take advantage of people’s fears by pretending to sell face masks online to trick unsuspecting people into giving away their credit card details. Do not open any email attachments or click on any links that seem suspicious.

Regularly change cloud passwords with two-factor authentication

Two-factor authentication is a more secure way to access work applications. In addition to a password/username combo, you will be asked to verify who you are with a device that you, and only you, own, such as a mobile phone. Put simply: it uses two factors to confirm an identity. Guessing or stealing a password isn’t that hard for hackers nowadays, but the second form of identification sharply limits what they can pull off.
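For illustration, the rotating six-digit codes that authenticator apps generate usually follow the TOTP algorithm (RFC 6238): a shared secret plus the current 30-second time window are fed through HMAC-SHA-1. Here is a minimal Python sketch using the RFC’s published test secret, never a real account key:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """Compute an RFC 6238 time-based one-time password (the 'second factor')."""
    key = base64.b32decode(secret_b32)
    # Number of whole time steps since the Unix epoch.
    counter = int((now if now is not None else time.time()) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32) at test time T=59s.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, now=59))  # prints "287082", matching the RFC test vectors
```

Both your phone and the server run this same computation, so the server can check your code without it ever being sent ahead of time.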

Use strong, unique passwords

To limit the damage if a hacker does gain access to one of your accounts, use a complex password for each of your accounts, and never reuse your credentials across different platforms. It’s also a good idea to update your passwords regularly to further protect your data. You can also use a password manager, or a security solution that includes a password manager, to keep track of all your unique passwords.
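A password manager typically generates such passwords for you, but doing it yourself only requires a cryptographically secure random source. A minimal Python sketch (the character set shown is an illustrative choice, adjust it to each site’s rules):

```python
import secrets
import string

# Letters, digits, and a handful of common symbols.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length=16):
    """Generate a random password using the OS's secure random source,
    not the predictable `random` module."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # a different 16-character password each run
```

The `secrets` module is preferred over `random` here because its output is not reproducible from earlier outputs, which is exactly what a password needs.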

Browse with security protection

Ensure that you continue to update your security solutions across all devices. This will help protect devices against malware, phishing attacks, and other threats, as well as help identify malicious websites while browsing.

source: https://www.mcafee.com/blogs/consumer/top-five-things-to-do-to-stay-secure-when-working-from-home/

WOULD YOU LIKE HELP SETTING UP VPN OR IT SECURITY?

Coronavirus phishing emails: How to protect against COVID-19 scams
18 Mar

Coronavirus phishing emails: How to protect against COVID-19 scams

The overwhelming amount of news coverage surrounding the novel coronavirus has created a new danger — phishing attacks looking to exploit public fears about the sometimes-deadly virus.

How does it work? Cybercriminals send emails claiming to be from legitimate organizations with information about the coronavirus.

The email messages might ask you to open an attachment to see the latest statistics. If you click on the attachment or embedded link, you’re likely to download malicious software onto your device.

The malicious software — malware, for short — could allow cybercriminals to take control of your computer, log your keystrokes, or access your personal information and financial data, which could lead to identity theft.

The coronavirus — or COVID-19, the name of the respiratory disease it causes — has affected the lives of millions of people around the world. It’s impossible to predict its long-term impact. But it is possible to take steps to help protect yourself against coronavirus-related scams.

Here’s some information that can help.

How do I spot a coronavirus phishing email? Examples

Coronavirus-themed phishing emails can take different forms, including these.

CDC alerts. Cybercriminals have sent phishing emails designed to look like they’re from the U.S. Centers for Disease Control and Prevention (CDC). The email might falsely claim to link to a list of coronavirus cases in your area. “You are immediately advised to go through the cases above for safety hazards,” the text of one phishing email reads.

Health advice emails. Phishers have sent emails that offer purported medical advice to help protect you against the coronavirus. The emails might claim to be from medical experts near Wuhan, China, where the coronavirus outbreak began. “This little measure can save you,” one phishing email says. “Use the link below to download Safety Measures.”

Workplace policy emails. Cybercriminals have targeted employees’ workplace email accounts. One phishing email begins, “All, Due to the coronavirus outbreak, [company name] is actively taking safety precautions by instituting a Communicable Disease Management Policy.” If you click on the fake company policy, you’ll download malicious software.

How do I avoid scammers and fake ads?

Scammers have posted ads that claim to offer treatment or cures for the coronavirus. The ads often try to create a sense of urgency — for instance, “Buy now, limited supply.”

At least two bad things could happen if you respond to the ads.

One, you might click on an ad and download malware onto your device.

Two, you might buy the product and receive something useless, or nothing at all. Meanwhile, you may have shared personal information such as your name, address, and credit card number.

Bottom line? It’s smart to avoid any ads seeking to capitalize on the coronavirus.

Tips for recognizing and avoiding phishing emails

Here are some ways to recognize and avoid coronavirus-themed phishing emails.

Like other types of phishing emails, these messages usually try to lure you into clicking on a link or providing personal information that can be used to commit fraud or identity theft. Here are some tips to avoid getting tricked.

  • Beware of online requests for personal information. A coronavirus-themed email that seeks personal information like your Social Security number or login information is a phishing scam. Legitimate government agencies won’t ask for that information. Never respond to the email with your personal data.
  • Check the email address or link. You can inspect a link by hovering your mouse pointer over it to see where it leads. Sometimes it’s obvious the web address is not legitimate, but keep in mind that phishers can create links that closely resemble legitimate addresses. When in doubt, delete the email.
  • Watch for spelling and grammatical mistakes. If an email includes spelling, punctuation, and grammar errors, it’s likely a sign you’ve received a phishing email. Delete it.
  • Look for generic greetings. Phishing emails are unlikely to use your name. Greetings like “Dear sir or madam” signal an email is not legitimate.
  • Avoid emails that insist you act now. Phishing emails often try to create a sense of urgency or demand immediate action. The goal is to get you to click on a link and provide personal information — right now. Instead, delete the message.
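The red flags above lend themselves to a simple automated check. Here is a toy Python sketch that counts how many of these signals appear in an email body; the patterns and scoring are illustrative assumptions, not a real spam filter:

```python
import re

# Heuristic signals drawn from the tips above; the word lists are illustrative.
SIGNALS = {
    "generic_greeting": re.compile(r"dear (sir|madam|customer|user)", re.I),
    "urgency": re.compile(r"\b(act now|immediately|urgent|right now)\b", re.I),
    "credential_request": re.compile(
        r"(social security|password|login|verify your account)", re.I
    ),
}

def phishing_score(email_text):
    """Count how many phishing red flags appear in an email body."""
    return sum(1 for rx in SIGNALS.values() if rx.search(email_text))

msg = "Dear customer, act now to verify your account password!"
print(phishing_score(msg))  # 3: generic greeting, urgency, credential request
```

Real mail filters weigh hundreds of such signals plus sender reputation; the point here is only that each tip above is a concrete, checkable pattern.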

Where can I find legitimate information about the coronavirus?

It’s smart to go directly to reliable sources for information about the coronavirus. That includes government offices and health care agencies.

Here are a few of the best places to find answers to your questions about the coronavirus.

Centers for Disease Control and Prevention. The CDC website includes the most current information about the coronavirus. Here’s a partial list of topics covered.

  • How the coronavirus spreads
  • Symptoms
  • Prevention and treatment
  • Cases in the U.S.
  • Global locations with COVID-19
  • Information for communities, schools, and businesses
  • Travel

World Health Organization. WHO provides a range of information, including how to protect yourself, travel advice, and answers to common questions.

National Institutes of Health. NIH provides updated information and guidance about the coronavirus. It includes information from other government organizations.

Source: https://us.norton.com/internetsecurity-online-scams-coronavirus-phishing-scams.html

Zebra Programming Language (ZPL II) and Raw Printing
25 Feb

Zebra Programming Language (ZPL II) and Raw Printing

Zebra Programming Language (ZPL) is the command language used by all ZPL-compatible printers. It is a command-based language that the printer interprets as instructions for creating the images printed on labels. This document contains links to manuals, examples, and details for individual ZPL commands.

Websiteflix has programmers who are experienced in working with Zebra ZPL II and raw printer commands.

We have extensive experience with raw commands, RFID printing, Zebra ZPL II, QR code printing, label printing, thermal label printing, shipping label printing, and more.

Do you need help with your application? Call us today at 954-323-2004 and we can assist you!

^XA
^FX Set label home position to 30,30 dots ^FS
^LH30,30
^FX Field at 20,10: font D, 90 dots high, 50 dots wide ^FS
^FO20,10
^ADN,90,50
^FDWebsiteflix.com^FS
^XZ
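ZPL like the sample above is plain text, so “raw printing” usually means sending those bytes straight to the printer rather than through a graphics driver. Networked Zebra printers commonly accept raw jobs over TCP port 9100. A minimal Python sketch (the printer IP address is a placeholder, not a real device):

```python
import socket

def send_zpl(zpl, host, port=9100):
    """Send a raw ZPL string to a network label printer over TCP port 9100."""
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(zpl.encode("ascii"))

# Every ZPL format starts with ^XA and ends with ^XZ.
label = "^XA^FO20,10^ADN,90,50^FDWebsiteflix.com^FS^XZ"

# Uncomment with your printer's actual IP address:
# send_zpl(label, "192.168.1.50")
```

On Windows, the same bytes can instead be sent to a shared printer with the spooler in RAW mode; the key point is that nothing reinterprets the ZPL on the way to the printhead.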

Resources: https://www.zebra.com/us/en/support-downloads/knowledge-articles/zpl-command-information-and-details.html