Google is one of the largest tech firms leading the charge for more robust AI regulation. As AI becomes integrated into more and more Android apps, the threat to users’ privacy and security has grown with it. To address this, starting January 31, 2024, Google will require Android app developers to oversee their generative AI features more closely in order to protect consumers.
Under the Play Store’s new policy, apps must give users a way to report or flag offensive AI-generated content from within the app itself. Google updated the policy in response to the growing exploitation of users through AI technology.
Some app permissions already require additional review by the Google Play team, which is an effective way of safeguarding user privacy. Google has now expanded these criteria with a new policy that restricts which Android apps can request broad photo and video permissions: apps should request them only when media access is central to their functionality, while one-off or infrequent needs should be handled through the system photo picker instead.
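For apps that only need occasional access to a user’s photos or videos, Google points developers to the Android photo picker, which hands back individual items without any broad media permission. A minimal Kotlin sketch using the AndroidX ActivityResultContracts API follows; the activity class and handler names are hypothetical:

```kotlin
// Minimal sketch: let the user pick a single image with the system photo
// picker instead of requesting READ_MEDIA_IMAGES / READ_MEDIA_VIDEO.
// Assumes an AndroidX ComponentActivity and the androidx.activity library.
import android.net.Uri
import androidx.activity.ComponentActivity
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts

class ProfilePhotoActivity : ComponentActivity() {

    // Register the photo-picker contract; the callback receives a Uri
    // (or null if the user cancelled) without any storage permission.
    private val pickImage =
        registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri: Uri? ->
            uri?.let { useSelectedImage(it) }
        }

    // Called from a button click, for example.
    fun onChoosePhotoClicked() {
        pickImage.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }

    private fun useSelectedImage(uri: Uri) {
        // Hypothetical app-specific handling of the selected image.
    }
}
```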
All Android apps that offer AI functionality such as chatbots, image generators, or voice synthesizers will have to abide by the updated regulations. Apps that merely use AI for productivity features, however, are exempt, as are apps that only host AI-generated content rather than generating it themselves.
The policy update issued today also establishes stricter guidelines for full-screen intent notifications, which surface high-priority messages that demand the user’s immediate attention. Google has placed this capability behind a special app access permission to ensure it is limited to genuinely high-priority use cases.
On Android 14 and above, only apps whose essential functionality requires a full-screen notification, such as alarm or incoming-call apps, will be granted the full-screen intent permission by default; all others will need to ask the user to grant it. Limiting notifications in this way makes for a better user experience.
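On Android 14 (API level 34), an app can check at runtime whether it still holds this special access and, if not, send the user to the system screen where it can be granted. A rough sketch, assuming it is called from an Activity context (the function name is illustrative):

```kotlin
// Rough sketch for Android 14 (API 34): check whether the app may post
// full-screen intent notifications and, if not, open the system screen
// where the user can grant that special app access.
import android.app.NotificationManager
import android.content.Context
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings

fun ensureFullScreenIntentAccess(context: Context) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE) {
        val nm = context.getSystemService(NotificationManager::class.java)
        if (!nm.canUseFullScreenIntent()) {
            // Deep-link to the full-screen intent special app access page
            // for this package so the user can opt in.
            val intent = Intent(Settings.ACTION_MANAGE_APP_USE_FULL_SCREEN_INTENT)
                .setData(Uri.fromParts("package", context.packageName, null))
            context.startActivity(intent)
        }
    }
}
```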
It is unusual to see Google move first with a policy on AI apps and chatbots; typically it is Apple that issues new restrictions to curb problematic app behavior, with Google following suit.
Apple, however, does not yet have an official AI or chatbot policy in its App Store Guidelines, though it has tightened restrictions in other areas, such as apps that collect data to identify a user or device, a practice known as “fingerprinting,” and apps that attempt to copy others.
Google has already published the updated policies, but app developers have until January 2024 to incorporate the changes. It will be interesting to see how these new rules shape the next wave of AI apps.