Google's AI review system blocks millions of malicious apps in 2024

In 2024, 2.36 million app submissions were blocked from the Play Store


Google took major actions against apps and developer accounts that violated its policies.

In 2024, Google blocked 2.36 million app submissions from being added to the Play Store because they posed potential cybersecurity risks.

According to TechRadar, Google also banned 158,000 developer accounts that attempted to upload dangerous software such as malware and spyware.

To do this, Google used artificial intelligence (AI) to help identify and block harmful apps and developer accounts.

Over 92% of Google's human reviews of harmful apps are now AI-assisted, with human reviewers working alongside AI tools to make the final decision.

The company said in a statement, "Today, over 92% of our human reviews for harmful apps are AI-assisted, allowing us to take quicker and more accurate action to help prevent harmful apps from becoming available on Google Play."

It added, "That's enabled us to stop more bad apps than ever from reaching users through the Play Store, protecting users from harmful or malicious apps before they can cause any damage."

Google detected malicious apps by checking the permissions they requested.

Apps that ask for more permissions than necessary, such as access to data or features unrelated to their purpose, are often treated as suspicious.
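As a rough illustration of this heuristic, and not Google's actual review pipeline, the idea can be sketched as comparing an app's requested permissions against a baseline expected for its category. The category names and baselines below are hypothetical examples; the permission strings are real Android permission identifiers.

```python
# Minimal sketch of permission-based flagging, assuming a per-category
# baseline of expected permissions. This is NOT Google's actual system;
# the categories and baselines here are hypothetical examples.
EXPECTED_PERMISSIONS = {
    "flashlight": {"android.permission.CAMERA"},  # torch apps use the camera flash
    "calculator": set(),                          # a calculator needs no permissions
}

def suspicious_permissions(category: str, requested: set[str]) -> set[str]:
    """Return the permissions an app requests beyond its category's baseline."""
    baseline = EXPECTED_PERMISSIONS.get(category, set())
    return requested - baseline

# Example: a calculator app requesting contacts and precise location
# asks for data unrelated to its purpose, so both requests get flagged.
flagged = suspicious_permissions(
    "calculator",
    {"android.permission.READ_CONTACTS", "android.permission.ACCESS_FINE_LOCATION"},
)
print(flagged)  # both permissions flagged for further review
```

In a real system such a check would only be one signal among many, feeding flagged apps into the AI-assisted human review described above rather than blocking them outright.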