A2P 10DLC Background and New Regulations
05 Dec

What is A2P?

Application to Person (A2P) messaging is SMS/MMS traffic in which a person is receiving messages from an application rather than another individual.

US telecom carriers consider any messages sent from an SMS App or Web Service number (or any other messaging provider) to be application to person. Learn more in this glossary article about A2P.

Traffic sent from an individual person to another person is called Person to Person (P2P) traffic.

What is 10DLC?

10DLC stands for 10 Digit Long Code. A 10DLC phone number contains 10 digits, and is also called a local phone number. When you are buying a US phone number from SMS App or Web Service, 10DLC numbers have “Local” as their Type.

You might also hear 10DLC numbers called 10DLC “routes”.

You can also use Toll-Free numbers and short codes to send messages from SMS App or Web Service to people in the US.

Why was A2P 10DLC created?

10 digit long code numbers in the US were originally designed for person to person (P2P) communication. These routes were unregulated, and in recent years have started seeing abuse from spam applications and unsolicited messaging.

Due to the increase in spam messages, many consumers have lost trust in SMS as a form of communication. US A2P 10DLC is the standard that carriers have implemented in the US to regulate this communication pathway.

A2P 10DLC improves the end-user experience by making sure that people can opt in and out of messaging and also know who is sending them messages. It also benefits businesses, offering them higher messaging throughput, brand awareness, and accountability.

Who needs to register for A2P 10DLC?

Anyone sending SMS/MMS messages over a 10DLC number from an application to the US must register for A2P 10DLC.

Carriers consider all SMS traffic from SMS App or Web Service to be sent from an application, so anyone using a 10DLC number with SMS App or Web Service to send SMS messages to the US will need to register. This includes individuals and hobbyists who use SMS App or Web Service to send transactional SMS notifications.

Toll-Free numbers and short code numbers are not part of the A2P 10DLC system and can also be used for messaging end-users in the United States.

If you are only using 10DLC numbers to send user verification text messages, you can use SMS App or Web Service Verify rather than registering for A2P 10DLC.

Registering for A2P 10DLC results in lower message filtering and higher messaging throughput. Additionally, customers who send messages from an SMS App or Web Service 10DLC number but do not register will incur additional carrier fees for sending unregistered traffic. You can register for A2P 10DLC within the SMS App or Web Service Console. If you are an ISV registering your customers for A2P 10DLC, you can also use SMS App or Web Service’s APIs.

General A2P 10DLC registration steps

US A2P 10DLC has been put in place to ensure that all A2P 10DLC traffic to US phone numbers is verified and consensual. To meet this goal, there are two main components of A2P 10DLC registration:

  • Create a Brand: You provide information about who is sending these messages so that carriers know you are a legitimate sender.
  • Create a Campaign: You provide information about how end-users can opt in, opt out, and receive help, along with a description of the purpose of your messages.

You can create a Brand and Campaign either in the SMS App or Web Service Console or via the SMS App or Web Service API, depending on what type of customer you are. The section below shows the different customer types.
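For API-based registration, the two steps map naturally onto two request payloads. The sketch below is purely illustrative: the field names, values, and function names are assumptions, not the provider’s actual API schema, so consult the official API reference for the real endpoints and fields.

```python
# Hypothetical sketch of the two registration payloads described above.
# All field names and example values are illustrative assumptions,
# not the provider's real API schema.

def brand_payload(legal_name, tax_id, country, website):
    """Step 1: identify the sender so carriers know you are legitimate."""
    return {
        "legalName": legal_name,
        "taxId": tax_id,        # e.g. an EIN in the U.S.
        "country": country,
        "website": website,
    }

def campaign_payload(brand_id, use_case, description,
                     opt_in_keywords, opt_out_keywords, help_keywords):
    """Step 2: describe the messages and how end-users opt in/out or get help."""
    return {
        "brandId": brand_id,
        "useCase": use_case,                # e.g. a standard or low-volume use case
        "description": description,         # the purpose of your messages
        "optInKeywords": opt_in_keywords,   # how end-users consent
        "optOutKeywords": opt_out_keywords, # e.g. ["STOP"]
        "helpKeywords": help_keywords,      # e.g. ["HELP"]
    }
```

In practice you would submit each payload to the provider’s registration endpoint and wait for the Brand to be vetted before submitting its Campaigns.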

Determine your customer type

Below are the different A2P 10DLC customer types with SMS App or Web Service:

Direct Brand: You’re a business owner who uses SMS App or Web Service messaging services to send and receive SMS to/from your customers. You have a business Tax ID (not including a US Social Security Number).

Independent Software Vendor (ISV): You’re a software company that embeds SMS App or Web Service APIs into your software solutions to power digital communications for your customers.

Sole Proprietor: You’re a student, hobbyist, someone working at an organization, or someone trying out SMS App or Web Service messaging products for the first time.

Determine your Brand type

Review the chart below to determine the best Brand type for you or your customers.

To determine which A2P Brand is best for you, you should primarily consider your current and expected messaging traffic volume.

Campaigns per Brand:

  • Sole Proprietor Brands: one Campaign per Brand.
  • Low Volume Standard and Standard Brands: each Brand may register up to five Campaigns, unless a clear and valid business reason is provided for exceeding this limit.

Daily message volume:

  • Sole Proprietor Brands: 1,000 SMS segments and MMS per day to T-Mobile (approximately 3,000 SMS segments and MMS per day across US carriers).
  • Low Volume Standard Brands: up to 2,000 SMS segments and MMS per day to T-Mobile (approximately 6,000 SMS segments and MMS per day across US carriers), with the exception of companies in the Russell 3000 Index, which can send 200,000 SMS segments and MMS per day to T-Mobile.
  • Standard Brands: from 2,000 up to unlimited SMS segments and MMS per day to T-Mobile, depending on your Trust Score.

Note that if you are registering a company that is part of the Russell 3000 Index, you can unlock additional volume and throughput with a Low Volume Standard Registration.

Brands per Tax ID

Standard and Low-Volume Standard Brands require a Tax ID (e.g. an EIN in the U.S., a Canadian Business Number in Canada, or the equivalent in other countries). Each Tax ID may be used to register up to five Standard/Low Volume Standard Brands. If you exceed this limit, additional Brands will still be registered successfully, but their Campaigns may be rejected during manual Campaign vetting unless a clear and valid business reason is presented.

Determine your Campaign use case type

Your Campaign use case type describes the general type of messages you will be sending to end-users, such as marketing or account verification. There are a few different categories of use case types:

  • Standard. See the full list of standard use cases.
  • Low-Volume Mixed. This Campaign use case type offers lower messaging volume (fewer than 2,000 message segments per day on T-Mobile) and throughput with a lower monthly fee.
  • Special, such as non-profits and emergency services. See the full list of special use cases.

The different Campaign types have varying monthly fees and messaging throughput associated with them. See a list of A2P Campaign type fees in this support article.

Note that Low Volume Standard Brands receive lower messaging throughput for campaigns than Standard Brands.

Register for US A2P 10DLC with SMS App or Web Service

When you have determined your customer type, desired brand type, and campaign use case type, you are ready to start the registration process. You will need to provide SMS App or Web Service with additional details about your business and your campaign when you register as well.

Get help with A2P 10DLC

Need help building or registering your A2P 10DLC application? Contact our developers today to learn more about SMS App or Web Services for A2P 10DLC.

What is SSO and how single sign-on works?
28 Dec

What is single sign-on (SSO)?

Single sign-on (SSO) is a technology which combines several different application login screens into one. With SSO, a user only has to enter their login credentials (username, password, etc.) one time on a single page to access all of their SaaS applications.

Imagine if customers who had already been admitted to a bar were asked to show their identification card to prove their age each time they attempted to purchase additional alcoholic beverages. Some customers would quickly become frustrated with the continual checks and might even attempt to circumvent these measures by sneaking in their own beverages.

However, most establishments will only check a customer’s identification once, and then serve the customer several drinks over the course of an evening. This is somewhat like an SSO system: instead of establishing their identity over and over, a user establishes their identity once and can then access several different services.

SSO is an important aspect of many identity and access management (IAM) or access control solutions. User identity verification is crucial for knowing which permissions each user should have. Cloudflare Zero Trust is one example of an access control solution that integrates with SSO solutions for managing users’ identities.

What are the advantages of SSO?

In addition to being much simpler and more convenient for users, SSO is widely considered to be more secure. Here are a few reasons:

  1. No repeated passwords: When users have to remember passwords for several different apps and services, a condition known as “password fatigue” is likely to set in: users will re-use passwords across services. Using the same password across several services is a huge security risk because it means that all services are only as secure as the service with the weakest password protection: if that service’s password database is compromised, attackers can use the password to hack all of the user’s other services as well. SSO eliminates this scenario by reducing all logins down to one login.
  2. Multi-factor authentication: Multi-factor authentication, or MFA, refers to the use of more than one identity factor to authenticate a user. For example, in addition to entering a username and password, a user might have to connect a USB device or enter a code that appears on their smartphone. Possession of this physical object is a second “factor” that establishes the user is who they say they are. MFA is much more secure than relying on a password alone. SSO makes it possible to activate MFA at a single point instead of having to activate it for three, four, or several dozen apps, which may not be feasible.
  3. Stronger passwords: Since users only have to use one password, SSO makes it easier for them to create, remember, and use stronger passwords.* In practice, this is typically the case: most users do use stronger passwords with SSO. (*What makes a password “strong”? A strong password is not easily guessed and is random enough that a brute force attack is not likely to succeed. w7:g"5h$G@ is a fairly strong password; password123 is not.)
  4. Single point for enforcing password re-entry: Administrators can enforce re-entering credentials after a certain amount of time to make sure that the same user is still active on the signed-in device. With SSO, they have a central place from which to do this for all internal apps, instead of having to enforce it across multiple different apps, which some apps may not support.
  5. Better password policy enforcement: With one place for password entry, SSO provides a way for IT teams to easily enforce password security rules. For example, some companies require users to reset their passwords periodically. With SSO, password resets are easier to implement: instead of constant password resets across a number of different apps and services, users only have one password to reset. (While the value of regular password resets has been called into question, some IT teams still consider them an important part of their security strategy.)
  6. Less time wasted on password recovery: In addition to the above security benefits, SSO also cuts down on wasted time for internal teams. IT has to spend less time on helping users recover or reset their passwords for dozens of apps, and users spend less time signing into various apps to perform their jobs. This has the potential to increase business productivity.
  7. Internal credential management instead of external storage: Usually, user passwords are stored remotely in an unmanaged fashion by applications and services that may or may not follow best security practices. With SSO, however, they are stored internally in an environment that an IT team has more control over.
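The second factor in point 2 is often a time-based one-time password (TOTP), the rotating six-digit code shown by authenticator apps. As a sketch of the underlying mechanism (RFC 6238), it can be computed with nothing but the Python standard library; in a real deployment the shared secret comes from the provider’s enrollment QR code, not from code like this:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset derived from the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test secret is the ASCII string "12345678901234567890".
rfc_secret = base64.b32encode(b"12345678901234567890").decode()
```

A login form would compare the user’s entry against `totp(secret)` for the current time window; `totp(rfc_secret, t=59, digits=8)` reproduces the RFC’s published test value 94287082.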

How does an SSO login work?

Whenever a user signs in to an SSO service, the service creates an authentication token that remembers that the user is verified. An authentication token is a piece of digital information stored either in the user’s browser or within the SSO service’s servers, like a temporary ID card issued to the user. Any app the user accesses will check with the SSO service. The SSO service passes the user’s authentication token to the app and the user is allowed in. If, however, the user has not yet signed in, they will be prompted to do so through the SSO service.
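The flow just described, sign in once, receive a token, and let each app defer to the SSO service, can be sketched in a few lines of Python. The class and method names are illustrative, and the in-memory token table is a simplification; a real service uses signed tokens and a separate identity store.

```python
# Illustrative sketch of the SSO flow described above; names and the
# in-memory token table are simplifications, not a production design.
import secrets
import time

class SSOService:
    """Issues and validates short-lived authentication tokens."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._tokens = {}  # token -> (username, issued_at)

    def sign_in(self, username, password):
        # A real SSO service defers this check to a separate identity
        # management service; this placeholder accepts any non-empty pair.
        if not (username and password):
            return None
        token = secrets.token_urlsafe(32)
        self._tokens[token] = (username, time.time())
        return token

    def validate(self, token):
        entry = self._tokens.get(token)
        if entry is None:
            return None
        username, issued_at = entry
        if time.time() - issued_at > self.ttl:
            del self._tokens[token]  # expired: force a fresh sign-in
            return None
        return username

class App:
    """A participating app checks tokens with the SSO service
    instead of keeping its own password database."""

    def __init__(self, sso):
        self.sso = sso

    def handle_request(self, token):
        user = self.sso.validate(token)
        return f"welcome {user}" if user else "redirect to SSO login"
```

A second app constructed with the same `SSOService` would accept the same token, which is exactly the single sign-on effect.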

An SSO service does not necessarily remember who a user is, since it does not store user identities. Most SSO services work by checking user credentials against a separate identity management service.

Think of SSO as a go-between that can confirm whether a user’s login credentials match with their identity in the database, without managing the database themselves — somewhat like when a librarian looks up a book on someone else’s behalf based on the title of the book. The librarian does not have the entire library card catalog memorized, but they can access it easily.

How do SSO authentication tokens work?

The ability to pass an authentication token to external apps and services is crucial in the SSO process. This is what enables identity verification to take place separately from other cloud services, making SSO possible.

Think of an exclusive event that only a few people are allowed into. One way to indicate that the guards at the entrance to the event have checked and approved a guest is to stamp each guest’s hand. Event staff can check the stamps of every guest to make sure they are allowed to be there. However, not just any stamp will do; event staff will know the exact shape and color of the stamp used by the guards at the entrance.

Just as each stamp has to look the same, authentication tokens have their own communication standards to ensure that they are correct and legitimate. The main authentication token standard is called SAML (Security Assertion Markup Language). Similar to how webpages are written in HTML (Hypertext Markup Language), authentication tokens are written in SAML.
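A SAML assertion is an XML document. The fragment below is a heavily simplified, hypothetical illustration of what such a token carries; real SAML 2.0 assertions add XML namespaces, a digital signature (the ‘stamp’ that apps verify), and audience restrictions.

```python
import xml.etree.ElementTree as ET

# Heavily simplified, illustrative SAML-style assertion. Real SAML 2.0
# assertions include XML namespaces, a digital signature, and audience
# restrictions, all omitted here for clarity.
assertion_xml = """
<Assertion ID="_abc123" IssueInstant="2022-01-01T00:00:00Z">
  <Issuer>https://sso.example.com</Issuer>
  <Subject><NameID>alice@example.com</NameID></Subject>
  <Conditions NotBefore="2022-01-01T00:00:00Z"
              NotOnOrAfter="2022-01-01T01:00:00Z"/>
</Assertion>
"""

# A receiving app parses the token to learn who the user is and
# whether the assertion is still within its validity window.
root = ET.fromstring(assertion_xml)
issuer = root.find("Issuer").text
subject = root.find("./Subject/NameID").text
```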

How does SSO fit into an access management strategy?

SSO is only one aspect of managing user access. It must be combined with access control, permission control, activity logs, and other measures for tracking and controlling user behavior within an organization’s internal systems. SSO is a crucial element of access management, however. If a system does not know who a user is, there is no way to allow or restrict that user’s actions.

Does Websiteflix integrate with SSO solutions?

Among the many options in today’s market, we can help you implement Cloudflare Zero Trust, which controls and secures user access to applications and websites; it can act as a replacement for most VPNs. Cloudflare integrates with SSO providers in order to identify users and enforce their assigned access permissions.

Need help with implementing SSO solutions?

Give us a call today at (855) 225-4535, or fill out our contact form, and talk to one of our cyber-safety experts.

Source: https://www.cloudflare.com/learning/access-management/what-is-sso/

Cybercrime is increasing and here are the effective ways to protect yourself
05 Oct

At Websiteflix, we offer many security tools to safeguard your assets. Here are a few extra steps you can take to significantly reduce the risk of a scammer targeting your accounts during National Cybersecurity Awareness Month and beyond:

  1. Enroll in two-factor authentication (2FA) for your online accounts as well as your email and mobile service provider accounts. 2FA acts as an extra hurdle for scammers, even if they learn your username and password.
  2. Enroll in biometric security for your mobile device, where possible. A biometric login (fingerprint or facial recognition) is far more secure than a username and password on its own. It’s also a faster way to log in.
  3. Always use unique usernames and passwords for each of your accounts. Scammers purchase compromised login details from the hidden web and test them on various websites to find people who reuse their credentials for multiple accounts. Don’t let them find you!
  4. Be aware of social engineering scams. The number one way cybercrimes begin is with a malicious email link, attached document, text message, or a spoofed or compromised web page. Be wary of anyone who claims to be from IT services, alleges that there’s a virus on your device, requests remote access to your computer, or asks for a password or a one-time PIN.
  5. Keep your contact information up to date in case there’s an issue and your provider needs to reach you. Double-check that your phone number and email are correct, or make updates in your online account or the mobile app.

Want to learn more about securing your accounts? Visit our security page for further information.

20 Jul

Customer Service

Thank you for contacting Websiteflix!

All inquiries received are handled within 48-72 hours.

If you need help with your domain names, website hosting, or email, please reach out to the datacenter at (480) 624-2500 or by logging in to your account at https://domains.netlittle.com/

For new hosting accounts please visit https://domains.netlittle.com/products/cpanel

For dedicated server support please visit https://domains.netlittle.com/products/dedicated-server



Google Reviews – Engage your customers with a Short URL
21 Jun

Would you like to learn the quick and easy way for your customers to leave a review on Google?

You can share a Google generated short URL with customers from your computer or on the mobile app. Through your short URL, customers can leave reviews and view your Business Profile.

Tip: You can create your URLs through the PlaceID Lookup Tool or from Google Search. However, we recommend that you use short URLs for customers.
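If you prefer to build the link yourself from a Place ID, Google’s documented ‘write a review’ URL format can be assembled in one line; the Place ID below is a placeholder, not a real profile.

```python
from urllib.parse import urlencode

def review_link(place_id):
    """Build Google's documented 'write a review' URL for a Place ID."""
    return ("https://search.google.com/local/writereview?"
            + urlencode({"placeid": place_id}))

# Example with a placeholder Place ID; look up your real one with the
# PlaceID Lookup Tool mentioned above.
link = review_link("ChIJExamplePlaceID")
```

Sharing this link takes customers straight to the review dialog for your Business Profile.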

Share your Google Reviews Short URL

From a Computer

  1. On your computer, sign in to Business Profile Manager. If you have multiple profiles, open the profile you want to manage.
  2. In the left menu, click Home.
  3. In the “Get more reviews” card, copy your short URL.

From a Mobile Device

  1. On your mobile device, open the Google My Business app.
    If you have multiple profiles, open the profile you want to manage.
  2. Tap Customers, then Reviews.
  3. In the top right, tap Share.
  4. Copy your short URL.

When customers click your link, they can rate your business and leave a review. Learn more about how to read and reply to customer reviews.

It’s against Google review policies to solicit reviews from customers through incentives or review stations located at your place of business. Reviews that violate these policies might be removed.

Do you need help with your Google Reviews Short URL?

Contact us at 855-225-4535 or click here to get in touch with our certified SEO specialists.

Google I/O 2022: What to expect from the next developer conference
28 Mar

Google’s annual developer conference is nearly upon us once more – but what could the tech giant be hiding up its sleeve this year?

Every year Google hosts a showcase of its newest software features named Google I/O (meaning Input/Output). The primary purpose is for developers to get to grips with any changes to the likes of the Android smartphone operating system, or the Wear operating system for smartwatches.

However, some previous editions of the event have included hardware launches too. Read on for all that we know so far about the 2022 event.

When is Google I/O 2022?

Google I/O 2022 will be held on Wednesday 11th to Thursday 12th May this year. This is typical for the event, which has taken place in May or June every year since its inception in 2008.

How can I watch Google I/O 2022?

Unlike last year’s edition, which was fully virtual, there will be a physical presence for Google I/O 2022 at the Shoreline Amphitheatre in the San Francisco Bay Area.

A limited live audience will be permitted to attend in person, but the rest of us will be able to watch a livestream of the event, which will be completely free and open to everyone.

As soon as the link is live, we will publish it on this page.

What can we expect to see at Google I/O 2022?

Firstly, we’re certainly going to see a whole lot more of Android 13, the next smartphone operating system. A developer preview has already been launched, with new privacy and security features at the core of this update, and we expect to see more features announced at Google I/O on top of that.

Wear, Google’s software for smartwatches that is supported by the likes of the Samsung Galaxy Watch 4, may also receive some attention; it was centre stage last year, and this year could see some more tweaks made to the platform.

On top of the software goodies, there could even be a hardware launch as well; the upcoming Google Pixel 6a is likely to make its debut at the event, as an affordable model in Google’s excellent range of smartphones.

Though there had long been rumors of a Pixel Watch being released at some point, the hype seems to have died down a bit for now so we’re not necessarily expecting to see it unveiled any time soon.

Google also previously announced that it was working to improve the accuracy of skin tones in its cameras (which was later demonstrated on the Pixel 6 series), and that you could store digital car keys on your phone.

Click here to register for Google I/O 2022

Five Years of VR: A Look at the Greatest Moments from Oculus
03 Jan

From Meta Editorial,

We released Oculus Rift in March 2016. It was a big moment — the launch of the first consumer VR headset of the modern era. And it was just the start. We’ve released five headsets in the past five years, each driving the technology forward and enabling major improvements in how people interact with VR and the experiences developers can offer.

We wanted to take a moment to celebrate the achievements of so many who have contributed to making VR what it is today.

Oculus Rift – March 2016

Before Rift, there was the Kickstarter — a crowdfunding campaign to raise money for DK1. By 2014, Facebook had acquired Oculus with a vision of VR as the next computing platform.

Rift was the first step toward that vision, a $599 fabric-covered headset with flip-down headphones, an external sensor, a remote and an Xbox controller.

Caitlin Kalinowski – Head of VR Hardware: “I give credit to Peter [Bristol]’s team. They took an ugly prototype, Crescent Bay, and figured out how to package it into a beautiful and elegant piece of consumer electronics. The fabric application, figuring out how to integrate audio into the straps… I think it established what consumer VR would be. Almost all VR since then has been derivative of Rift CV1 in some way. It showed people that VR could be a consumer product.”

Oculus Touch Launch – December 2016

Touch enabled players to essentially bring their hands into the virtual world. That ended up being a key turning point for VR, with games like Robo Recall, SUPERHOT VR, Arizona Sunshine, The Unspoken and The Gallery exploring the possibilities of this new control scheme and paving the way for Lone Echo, Beat Saber, Asgard’s Wrath and countless others. By the following summer, Rift and Touch were permanently bundled together.

Peter Bristol – Head of Industrial Design for FRL: “We probably made hundreds of models — just simple sticks with clay, crumpled paper, sanded foams, et cetera — trying to figure out how to get the ergonomics to work. The idea of the controllers becoming your hands set the trajectory of the entire program. We were trying to get your hands into this natural pose while holding the controllers so that your virtual hands matched your real hands. We also developed movements like the grip trigger to be similar to motions you use in the real world, so it would feel intuitive to use.”

Oculus Go – May 2018

Oculus Go reimagined the media-centric Gear VR as an all-in-one device — our first — with better lenses and longer battery life, a sleek new strap-based audio solution, higher-resolution screens and a mainstream-friendly $199 price point.

Matt Dickman – TPM, Health and Safety: “Go was the first time we really thought intentionally about accessibility and what that might mean in VR. You look at the attachment points inside the headset. Those were designed to hold the fabric interface originally — but you could repurpose those mounts to accommodate prescription lens inserts. That sort of thinking around comfort and ergonomics takes a long time (and internal design support), but it led to the accessories you see on Quest 2 today.”

Oculus Quest and Rift S – May 2019

In May 2019, we released two headsets on the same day: Rift S and Quest. Each sported a state-of-the-art inside-out tracking solution (Oculus Insight), higher resolution panels and a $399 price point. Rift S was a refinement of our previous PC efforts, while Quest was a groundbreaking all-in-one device that offered a relatively comparable experience without wires or any additional hardware, making VR more accessible to more people and helping developers reach new, larger audiences in the process.

By the end of 2019, Oculus Link allowed players to connect Quest to a PC for a best-of-both-worlds experience. And in 2019, the addition of an innovative hand tracking solution gave Quest users a glimpse of a more natural and intuitive VR future.

Atman Binstock – Chief Architect of Oculus VR: “I always believed that long-term, standalone VR would be the path forward. The question was more of when. My personal belief was that something like Rift and Touch was the experience we’d need to deliver on standalone. Not the same level of performance as a PC, but the self-presence, interaction, and social presence. And VR’s competing for people’s time with TVs, with laptops. So making the product easier to use, making “time to fun” lower—that’s huge.”

Oculus Quest 2 – October 2020

Our goal with Quest 2 was to give both players and developers a higher-powered and more customizable device — and do it for $100 less. That goal grew to be more difficult when the COVID-19 pandemic hit, but we managed to ship Quest 2 in October and are already so proud of its success and the success it’s brought developers.

Rangaprabhu Parthasarathy – Product Manager, Quest / Quest 2: “When we started Quest 2, we looked at all the things we did on the original Quest and CV1 and said what do we need in a mass market device? Affordable. Easy to use. Think about the setup process. Think about accessibility. Make it available in more places, make it friendly.”

The Next Five Years

What’s next? Michael Abrash has been making predictions about VR’s future since the first Oculus Connect, so we asked him. You can read his full response in our full oral history, but the short answer is:

Michael Abrash – Chief Scientist, Facebook Reality Labs: “We are at the very beginning. All this innovation, all this invention still has to happen with VR. Early VR rode on the back of other work that had been done. The cameras were cellphone cameras and the optics were basically off-the-shelf optics initially. Going from this point forward, we’re the ones who are developing it—and that’s exciting. It’s good. But it is also really, really challenging on the innovation front. People should realize that we’ve come a long way and we’ve done a great job — but this road stretches out for the rest of their lifetimes.”

We’re grateful to everyone who’s helped make VR real over the last five-plus years, including everyone who’s bought a headset or even shared a headset with their friends and family. Here’s to five more years and beyond.

To read the full oral history visit: tech.fb.com/five-years-of-vr-an-oral-history-from-oculus-rift-to-quest-2/

Source: https://about.fb.com/news/2021/03/five-years-of-vr-a-look-at-the-greatest-moments-from-oculus/

Windows 11 available on October 5
03 Sep

Today, we are thrilled to announce Windows 11 will start to become available on October 5, 2021. On this day, the free upgrade to Windows 11 will begin rolling out to eligible Windows 10 PCs and PCs that come pre-loaded with Windows 11 will start to become available for purchase. A new Windows experience, Windows 11 is designed to bring you closer to what you love.

As the PC continues to play a more central role in our lives than ever before — Windows 11 is ready to empower your productivity and inspire your creativity.

Here are 11 highlights of this release

  1. The new design and sounds are modern, fresh, clean and beautiful, bringing you a sense of calm and ease.
  2. With Start, we’ve put you and your content at the center. Start utilizes the power of the cloud and Microsoft 365 to show you your recent files no matter what device you were viewing them on.
  3. Snap Layouts, Snap Groups and Desktops provide an even more powerful way to multitask and optimize your screen real estate.
  4. Chat from Microsoft Teams integrated into the taskbar provides a faster way to connect to the people you care about.
  5. Widgets, a new personalized feed powered by AI, provides a faster way to access the information you care about, and with Microsoft Edge’s world class performance, speed and productivity features you can get more done on the web.
  6. Windows 11 delivers the best Windows ever for gaming and unlocks the full potential of your system’s hardware with technology like DirectX12 Ultimate, DirectStorage and Auto HDR. With Xbox Game Pass for PC or Ultimate you get access to over 100 high-quality PC games to play on Windows 11 for one low monthly price. (Xbox Game Pass sold separately.)
  7. Windows 11 comes with a new Microsoft Store rebuilt with an all-new design making it easier to search and discover your favorite apps, games, shows, and movies in one trusted location. We look forward to continuing our journey to bring Android apps to Windows 11 and the Microsoft Store through our collaboration with Amazon and Intel; this will start with a preview for Windows Insiders over the coming months.
  8. Windows 11 is the most inclusively designed version of Windows with new accessibility improvements that were built for and by people with disabilities.
  9. Windows 11 unlocks new opportunities for developers and creators. We are opening the Store to allow more developers and independent software vendors (ISVs) to bring their apps to the Store, improving native and web app development with new developer tools, and making it easier for you to refresh the look and feel across all our app designs and experiences.
  10. Windows 11 is optimized for speed, efficiency and improved experiences with touch, digital pen and voice input.
  11. Windows 11 is the operating system for hybrid work, delivering new experiences that work how you work, are secure by design, and easy and familiar for IT to deploy and manage. Businesses can also test Windows 11 in preview today in Azure Virtual Desktop, or at general availability by experiencing Windows 11 in the new Windows 365.

Thank you to the Windows Insider Community

The Windows Insider community has been an invaluable community in helping us get to where we are today. Since the first Insider Preview Build was released in June, the engagement and feedback have been unprecedented. The team has also enjoyed sharing more behind-the-scenes stories on the development of Windows 11 in a new series we launched in June, Inside Windows 11. We sincerely appreciate the energy and enthusiasm from this community.

Rolling out the free upgrade to Windows 11 in a phased and measured approach

The free upgrade to Windows 11 starts on October 5 and will be phased and measured with a focus on quality. Following the tremendous learnings from Windows 10, we want to make sure we’re providing you with the best possible experience. That means new eligible devices will be offered the upgrade first. The upgrade will then roll out over time to in-market devices based on intelligence models that consider hardware eligibility, reliability metrics, age of device and other factors that impact the upgrade experience. We expect all eligible devices to be offered the free upgrade to Windows 11 by mid-2022. If you have a Windows 10 PC that’s eligible for the upgrade, Windows Update will let you know when it’s available. You can also check whether Windows 11 is ready for your device by going to Settings > Windows Update and selecting Check for updates*.

Ready to elevate to 11? There’s never been a better time to purchase a new PC

October 5 is right around the corner — and there are a few things you can do to get ready for Windows 11. First, if you’re in need of a new PC now — don’t wait. You can get all the power and performance of a new Windows 10 PC and upgrade to Windows 11 for free after the rollout begins on October 5**.

We’ve worked closely with our OEM and retail partners to bring you powerful Windows 10 PCs today that will take you into the future with Windows 11. Here are a few to check out.

Source: https://blogs.windows.com/windowsexperience/2021/08/31/windows-11-available-on-october-5/

Google’s Core Web Vitals to Become Ranking Signals
02 Jun

Google’s Core Web Vitals to Become Ranking Signals

Google announces an upcoming change to search rankings that will incorporate Core Web Vitals as a ranking signal.

Search signals for page experience

“The page experience signal measures aspects of how users perceive the experience of interacting with a web page. Optimizing for these factors makes the web more delightful for users across all web browsers and surfaces, and helps sites evolve towards user expectations on mobile.”

Google is introducing a new ranking signal, which combines Core Web Vitals with existing user experience signals, to improve the way it evaluates the overall experience provided by a page.

This new ranking signal is in the early stages of development and is not scheduled to launch until at least next year.

To help site owners prepare, Google has provided an early look at the work being done so far.

The New ‘Page Experience’ Signal

The upcoming ranking signal will be known as the page experience signal.

The page experience signal consists of the Core Web Vitals, as well as these existing page experience metrics:

  • Mobile-friendliness
  • Safe-browsing
  • HTTPS-security
  • Intrusive interstitial guidelines

Core Web Vitals

Core Web Vitals, introduced earlier this month, are a set of metrics related to speed, responsiveness, and visual stability.

Google has defined these as the Core Web Vitals:

  • Largest Contentful Paint: The time it takes for a page’s main content to load. An ideal LCP measurement is 2.5 seconds or faster.
  • First Input Delay: The time it takes for a page to become interactive. An ideal measurement is less than 100 milliseconds.
  • Cumulative Layout Shift: The amount of unexpected layout shift of visual page content. An ideal measurement is less than 0.1.

This set of metrics was designed to help site owners measure the user experience they’re providing when it comes to loading, interactivity, and visual stability.
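To make the thresholds above concrete, here is a small illustrative Python sketch that classifies field measurements against them. The "good" boundaries (2.5 s LCP, 100 ms FID, 0.1 CLS) come from the list above; the lower "poor" boundaries (4 s, 300 ms, 0.25) follow Google's published guidance but are an assumption here, and the function names are our own.

```python
# "Good" thresholds as described above; "poor" boundaries assumed
# from Google's published guidance (values between the two rate as
# "needs improvement").
GOOD = {"lcp_s": 2.5, "fid_ms": 100, "cls": 0.1}
POOR = {"lcp_s": 4.0, "fid_ms": 300, "cls": 0.25}

def rate(metric: str, value: float) -> str:
    """Return 'good', 'needs improvement', or 'poor' for one measurement."""
    if value <= GOOD[metric]:
        return "good"
    if value <= POOR[metric]:
        return "needs improvement"
    return "poor"

print(rate("lcp_s", 2.1))   # -> good
print(rate("fid_ms", 180))  # -> needs improvement
print(rate("cls", 0.3))     # -> poor
```

In practice these ratings are computed over real-user field data (for example, the 75th percentile of page loads), not a single lab measurement.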

Core Web Vitals are not set in stone – which means they may change from year to year depending on what users expect out of a good web page experience.

For now, the Core Web Vitals are what is listed above. Google will certainly update the public if and when these metrics change.

For more details about Core Web Vitals, see our full report from when they were first introduced.

Page Experience Signal & Ranking

By adding Core Web Vitals as ranking factors, and combining them with other user experience signals, Google aims to help more site owners build pages that users enjoy visiting.

If Google determines that a page is providing a high-quality user experience, based on its page experience signal, then it will likely rank the page higher in search results.

However, content relevance is still considered important when it comes to rankings. A page with content that’s highly relevant to a query could conceivably rank well even if it had a poor page experience signal.

The opposite is also true, as Google states:

“A good page experience doesn’t override having great, relevant content. However, in cases where there are multiple pages that have similar content, page experience becomes much more important for visibility in Search.”

As Google mentions, the page experience signal is a tie-breaker of sorts: if two pages both provide excellent content, the one with the stronger page experience signal will rank higher in search results.

So don’t get so hung up on optimizing for page experience that the actual content on the page starts to suffer. Great content can, in theory, outrank a great page experience.

Evaluating Page Experience

As of yet, there is no single tool for evaluating page experience as a whole, although it is possible to measure the individual components that go into creating the page experience signal.

Measuring Core Web Vitals

When it comes to measuring Core Web Vitals, SEOs and site owners can use a variety of Google’s own tools such as:

  • Search Console
  • PageSpeed Insights
  • Lighthouse
  • Chrome DevTools
  • Chrome UX report
  • And more

Soon a plugin for the Chrome browser will also be available to quickly evaluate the Core Web Vitals of any page you’re looking at. Google is also working with third parties to bring Core Web Vitals to other tools.

Measuring other user experience signals

Here’s how SEOs and site owners can measure the other user experience signals:

  • Mobile-friendliness: Use Google’s mobile-friendly test.
  • Safe-browsing: Check the Security Issues report in Search Console for any issues with safe browsing.
  • HTTPS: If a page is served over a secure HTTPS connection then it will display a lock icon in the browser address bar.
  • Intrusive interstitial guidelines: This one is a bit trickier. Refer to Google’s guidelines on what counts as an intrusive interstitial.

When Will These Changes Happen?

There is no need to take immediate action, Google says, as these changes will not happen before next year.

Google will provide at least 6 months’ notice before they are rolled out.

The company is simply giving site owners a heads up in an effort to keep people informed about ranking changes as early as possible.

Source: https://www.searchenginejournal.com/googles-core-web-vitals-ranking-signal/370719/

Source: https://webmasters.googleblog.com/2020/05/evaluating-page-experience.html

Microsoft announces new supercomputer, lays out vision for future AI work
19 May

Microsoft announces new supercomputer, lays out vision for future AI work

Microsoft has built one of the top five publicly disclosed supercomputers in the world, making new infrastructure available in Azure to train extremely large artificial intelligence models, the company is announcing at its Build developers conference.

Built in collaboration with and exclusively for OpenAI, the supercomputer hosted in Azure was designed specifically to train that company’s AI models. It represents a key milestone in a partnership announced last year to jointly create new supercomputing technologies in Azure.

It’s also a first step toward making the next generation of very large AI models and the infrastructure needed to train them available as a platform for other organizations and developers to build upon.

“The exciting thing about these models is the breadth of things they’re going to enable,” said Microsoft Chief Technical Officer Kevin Scott, noting that the potential benefits extend far beyond narrow advances in one type of AI model.

“This is about being able to do a hundred exciting things in natural language processing at once and a hundred exciting things in computer vision, and when you start to see combinations of these perceptual domains, you’re going to have new applications that are hard to even imagine right now,” he said.

A new class of multitasking AI models

Machine learning experts have historically built separate, smaller AI models that use many labeled examples to learn a single task such as translating between languages, recognizing objects, reading text to identify key points in an email, or recognizing speech well enough to deliver today’s weather report when asked.

A new class of models developed by the AI research community has proven that some of those tasks can be performed better by a single massive model — one that learns from examining billions of pages of publicly available text, for example. This type of model can so deeply absorb the nuances of language, grammar, knowledge, concepts, and context that it can excel at multiple tasks: summarizing a lengthy speech, moderating content in live gaming chats, finding relevant passages across thousands of legal files or even generating code from scouring GitHub.

As part of a companywide AI at Scale initiative, Microsoft has developed its own family of large AI models, the Microsoft Turing models, which it has used to improve many different language understanding tasks across Bing, Office, Dynamics, and other productivity products.  Earlier this year, it also released to researchers the largest publicly available AI language model in the world, the Microsoft Turing model for natural language generation.

The goal, Microsoft says, is to make its large AI models, training optimization tools, and supercomputing resources available through Azure AI services and GitHub so developers, data scientists, and business customers can easily leverage the power of AI at Scale.

“By now most people intuitively understand how personal computers are a platform — you buy one and it’s not like everything the computer is ever going to do is built into the device when you pull it out of the box,” Scott said.

“That’s exactly what we mean when we say AI is becoming a platform,” he said. “This is about taking a very broad set of data and training a model that learns to do a general set of things and making that model available for millions of developers to go figure out how to do interesting and creative things with.”

Training massive AI models requires advanced supercomputing infrastructure, or clusters of state-of-the-art hardware connected by high-bandwidth networks. It also needs tools to train the models across these interconnected computers.

The supercomputer developed for OpenAI is a single system with more than 285,000 CPU cores, 10,000 GPUs, and 400 gigabits per second of network connectivity for each GPU server. Compared with other machines listed on the TOP500 supercomputers in the world, it ranks in the top five, Microsoft says. Hosted in Azure, the supercomputer also benefits from all the capabilities of robust modern cloud infrastructure, including rapid deployment, sustainable data centers, and access to Azure services.

“As we’ve learned more and more about what we need and the different limits of all the components that make up a supercomputer, we were really able to say, ‘If we could design our dream system, what would it look like?’” said OpenAI CEO Sam Altman. “And then Microsoft was able to build it.”

OpenAI’s goal is not just to pursue research breakthroughs but also to engineer and develop powerful AI technologies that other people can use, Altman said. The supercomputer developed in partnership with Microsoft was designed to accelerate that cycle.

“We are seeing that larger-scale systems are an important component in training more powerful models,” Altman said.

For customers who want to push their AI ambitions but who don’t require a dedicated supercomputer, Azure AI provides access to powerful computing with the same set of AI accelerators and networks that also power the supercomputer. Microsoft is also making available the tools to train large AI models on these clusters in a distributed and optimized way.

At its Build conference, Microsoft announced that it would soon begin open-sourcing its Microsoft Turing models, as well as recipes for training them in Azure Machine Learning. This will give developers access to the same family of powerful language models that the company has used to improve language understanding across its products.

It also unveiled a new version of DeepSpeed, an open-source deep-learning library for PyTorch that reduces the amount of computing power needed for large distributed model training. The update is significantly more efficient than the version released just three months ago and now allows people to train models more than 15 times larger and 10 times faster than they could without DeepSpeed on the same infrastructure.

Along with the DeepSpeed announcement, Microsoft announced it has added support for distributed training to the ONNX Runtime. The ONNX Runtime is an open-source library designed to enable models to be portable across hardware and operating systems. To date, the ONNX Runtime has focused on high-performance inferencing; today’s update adds support for model training, as well as adding the optimizations from the DeepSpeed library, which enable performance improvements of up to 17 times over the current ONNX Runtime.

“We want to be able to build these very advanced AI technologies that ultimately can be easily used by people to help them get their work done and accomplish their goals more quickly,” said Microsoft principal program manager Phil Waymouth. “These large models are going to be an enormous accelerant.”

Learning the nuances of language

Designing AI models that might one day understand the world more like people do starts with language, a critical component to understanding human intent, making sense of the vast amount of written knowledge in the world and communicating more effortlessly.

Neural network models that can process language, which are roughly inspired by our understanding of the human brain, aren’t new. But these deep learning models are now far more sophisticated than earlier versions and are rapidly escalating in size.

A year ago, the largest models had 1 billion parameters, each loosely equivalent to a synaptic connection in the brain. The Microsoft Turing model for natural language generation now stands as the world’s largest publicly available language AI model with 17 billion parameters.

This new class of models learns differently than supervised learning models that rely on meticulously labeled human-generated data to teach an AI system to recognize a cat or determine whether the answer to a question makes sense.

In what’s known as “self-supervised” learning, these AI models can learn about language by examining billions of pages of publicly available documents on the internet — Wikipedia entries, self-published books, instruction manuals, history lessons, human resources guidelines. In something like a giant game of Mad Libs, words or sentences are removed, and the model has to predict the missing pieces based on the words around it.

As the model does this billions of times, it gets very good at perceiving how words relate to each other. This results in a rich understanding of grammar, concepts, contextual relationships and other building blocks of language. It also allows the same model to transfer lessons learned across many different language tasks, from document understanding to answering questions to creating conversational bots.
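The "Mad Libs" objective described above can be reduced to a toy illustration. The sketch below is emphatically not Microsoft's implementation (real models use neural networks trained on billions of pages); it only shows the same idea in miniature: blank out a word and predict it from its neighbors, using statistics learned from nothing but unlabeled text.

```python
# Toy self-supervised "fill in the blank": learn adjacency counts from
# raw text, then predict a masked word from its left and right neighbors.
from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "the cat sat by the window",
]

follows = defaultdict(Counter)   # follows[w][x]: x appeared right after w
precedes = defaultdict(Counter)  # precedes[w][x]: x appeared right before w

for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
        precedes[b][a] += 1

def predict_blank(left: str, right: str) -> str:
    """Guess the masked word between two context words."""
    scores = follows[left] + precedes[right]  # combine both directions
    return scores.most_common(1)[0][0]

# "the ___ sat" -> the word most often seen in that slot
print(predict_blank("the", "sat"))  # -> cat
```

A real language model replaces these raw counts with billions of learned parameters, which is what lets the same training objective yield grammar, concepts, and context rather than just word co-occurrence.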

“This has enabled things that were seemingly impossible with smaller models,” said Luis Vargas, a Microsoft partner technical advisor who is spearheading the company’s AI at Scale initiative.

The improvements are somewhat like jumping from an elementary reading level to a more sophisticated and nuanced understanding of language. But it’s possible to improve accuracy even further by fine-tuning these large AI models on a more specific language task or exposing them to material that’s specific to a particular industry or company.

“Because every organization is going to have its own vocabulary, people can now easily fine-tune that model to give it a graduate degree in understanding business, healthcare or legal domains,” he said.

AI at Scale

One advantage to the next generation of large AI models is that they only need to be trained once with massive amounts of data and supercomputing resources. A company can take a “pre-trained” model and simply fine-tune for different tasks with much smaller datasets and resources.
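The pre-train/fine-tune pattern can be sketched as follows. Everything here is hypothetical and hugely simplified (the "pretrained encoder" is a stand-in function, and the task head is a perceptron, not Microsoft's API); the point is only the division of labor: the expensive encoder is frozen and reused, and only a small head is fit on a small labeled dataset.

```python
# Illustrative pre-train/fine-tune split (names and structure hypothetical).

def pretrained_features(text: str) -> list:
    """Stand-in for an expensive pretrained encoder: frozen, reused as-is."""
    return [len(text), text.count(" ") + 1, sum(c.isupper() for c in text)]

def train_head(examples, lr=0.01, epochs=200):
    """Fine-tuning step: fit a tiny linear head on the frozen features."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for text, label in examples:
            feats = pretrained_features(text)
            pred = 1 if sum(wi * f for wi, f in zip(w, feats)) > 0 else 0
            for i, f in enumerate(feats):
                w[i] += lr * (label - pred) * f  # perceptron update
    return w

# Small task-specific labeled set (1 = shouty/spam-like, 0 = normal).
examples = [("URGENT REPLY NOW", 1), ("see you at lunch", 0),
            ("WIN A FREE PRIZE", 1), ("notes from the meeting", 0)]
head = train_head(examples)

def classify(text: str) -> int:
    feats = pretrained_features(text)
    return 1 if sum(wi * f for wi, f in zip(head, feats)) > 0 else 0

print(classify("FREE CASH NOW"))  # -> 1
```

The economics described above fall out of this split: the costly part (the encoder) is trained once, while each downstream task only pays for the cheap head.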

The Microsoft Turing model for natural language understanding, for instance, has been used across the company to improve a wide range of product offerings over the last year. It has significantly advanced caption generation and question answering in Bing, improving answers to search questions in some markets by up to 125 percent.

In Office, the same model has fueled advances in the smart find feature enabling easier searches in Word, the Key Insights feature that extracts important sentences to quickly locate key points in a document, and Outlook’s Suggested replies feature that automatically generates possible responses to an email. Dynamics 365 Sales Insights also uses it to suggest actions to a seller based on interactions with customers.

Microsoft is also exploring large-scale AI models that can learn in a generalized way across text, images, and video. That could help with automatic captioning of images for accessibility in Office, for instance, or improve the ways people search Bing by understanding what’s inside images and videos.

To train its own models, Microsoft had to develop its own suite of techniques and optimization tools, many of which are now available in the DeepSpeed PyTorch library and ONNX Runtime. These allow people to train very large AI models across many computing clusters and also to squeeze more computing power from the hardware.

That requires partitioning a large AI model into its many layers and distributing those layers across different machines, a process called model parallelism. In a process called data parallelism, Microsoft’s optimization tools also split the huge amount of training data into batches that are used to train multiple instances of the model across the cluster, which are then periodically averaged to produce a single model.
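The data-parallel averaging step described above can be sketched with a deliberately tiny example. This is illustrative only: real systems such as DeepSpeed run replicas on separate GPUs and typically synchronize gradients rather than whole models, but the shape of the process (shard the data, train replicas independently, periodically average) is the same.

```python
# Toy data parallelism: two "replicas" of a one-parameter linear model
# y = w * x each train on their own data shard, then are averaged.
from statistics import mean

def train_step(weights, batch, lr=0.1):
    """One gradient step on squared error for a 1-parameter model y = w*x."""
    w = weights[0]
    grad = mean(2 * (w * x - y) * x for x, y in batch)
    return [w - lr * grad]

# Training data follows y = 2x; split it into one shard per replica.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
shards = [data[0:2], data[2:4]]

weights = [0.0]
for _ in range(50):
    # Each replica trains on its own shard...
    replicas = [train_step(weights, shard) for shard in shards]
    # ...then the replicas are averaged back into a single model.
    weights = [mean(r[0] for r in replicas)]

print(round(weights[0], 2))  # -> 2.0 (the true slope)
```

Model parallelism is the complementary trick: instead of copying the whole model per machine, its layers are partitioned across machines because no single one can hold it.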

The efficiencies that Microsoft researchers and engineers have achieved in this kind of distributed training will make using large-scale AI models much more resource-efficient and cost-effective for everyone, Microsoft says.

When you’re developing a cloud platform for general use, Scott said, it’s critical to have projects like the OpenAI supercomputing partnership and AI at Scale initiative pushing the cutting edge of performance.

He compares it to the automotive industry developing high-tech innovations for Formula 1 race cars that eventually find their way into the sedans and sport utility vehicles that people drive every day.

“By developing this leading-edge infrastructure for training large AI models, we’re making all of Azure better,” Scott said. “We’re building better computers, better-distributed systems, better networks, better datacenters. All of this makes the performance and cost and flexibility of the entire Azure cloud better.”

Source: https://blogs.microsoft.com/ai/openai-azure-supercomputer/