I always feel like AI is watching me: artificial intelligence and privacy | Farella Braun + Martel LLP

ChatGPT hit the early press, and every day we hear about new generative artificial intelligence products that can produce novel visual and text responses to human input. Following ChatGPT's fame, Google's Bard and Microsoft's Bing are now stepping into the limelight, but these are just a few of the hundreds, if not thousands, of generative AI products currently available or in development – there's no question that generative AI is here to stay. In fact, social media and other platform companies – TikTok (using AI to create or add effects to images), Instacart (to create shopping lists and answer grocery questions), and Shopify (to generate product descriptions), to name a few – have already integrated AI into their services.

Among all the questions raised by this innovative technology, there are some critical questions about privacy. While only time will tell the extent of the privacy issues, some of the concerns are already clear.

The California Consumer Privacy Act (CCPA) gives individuals the right to understand and opt out of automated decision-making technologies, including AI. Businesses need to monitor the reach of their AI tools to ensure they are respecting consumer choices, and, as with most privacy issues, organizations will need to properly organize their data to enable this level of control. Several risks are already apparent:

- AI could merge the information of two people in similar situations and incorrectly send materials, offers, and the like to the wrong person; if that person has not consented to such offers, the error could constitute a breach of data protection law. Controlling and restricting data access will be critical here.
- While anonymized data does not constitute personally identifiable information (PII) under the CCPA, AI could identify an individual from otherwise anonymized characteristics. Studies have shown that even very coarse data can be re-identified with over 95% accuracy using AI. Businesses can refrain from deliberate re-identification, but additional controls will be needed to prevent the AI from doing this on its own.
- AI can infer additional PII from even basic information, easily leaving the organization holding more PII than it requested or was given permission to collect.
- AI could use data in a particular database for purposes other than those for which consent was given. Again, control over data and its boundaries will be key to avoiding many of these privacy pitfalls.
- AI could collect data (and cause the company to collect data) from people who have not consented to such collection.
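One way to operationalize the purpose-limitation and access-control points above is a consent gate in front of any data store that feeds an AI system: data is released only for purposes the consumer actually agreed to. The sketch below is illustrative only – `ConsumerRecord`, `fetch_for_purpose`, and the purpose labels are hypothetical names, not part of any particular framework or of the CCPA itself:

```python
from dataclasses import dataclass, field


class ConsentError(Exception):
    """Raised when data is requested for a purpose the consumer never consented to."""


@dataclass
class ConsumerRecord:
    """Hypothetical record pairing personal data with its consented purposes."""
    consumer_id: str
    data: dict
    consented_purposes: set = field(default_factory=set)


def fetch_for_purpose(record: ConsumerRecord, purpose: str) -> dict:
    """Release data only when the requested purpose was consented to.

    An AI pipeline (recommendations, marketing, etc.) would call this
    instead of reading the data store directly, so an unconsented use
    fails loudly rather than silently repurposing the data.
    """
    if purpose not in record.consented_purposes:
        raise ConsentError(
            f"Consumer {record.consumer_id} has not consented to '{purpose}'"
        )
    return record.data


# Usage: a consumer who opted in to marketing but not analytics.
record = ConsumerRecord("c-123", {"email": "jane@example.com"}, {"marketing"})
fetch_for_purpose(record, "marketing")   # allowed
# fetch_for_purpose(record, "analytics") would raise ConsentError
```

The same gate also gives the business a single place to honor CCPA opt-outs: removing a purpose from `consented_purposes` immediately blocks every downstream AI use tied to it.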


The key to avoiding such data breaches is one that most companies that collect consumer data at scale are already familiar with – tight control over the data, including the ability to restrict access. Now more than ever, companies that collect personal data from their customers must practice good data hygiene to ensure that AI cannot de-anonymize customers' or third parties' data, or use or collect it without consent. In addition, letting users easily control their data, and transparently disclosing how AI is used in connection with that data, will build consumer confidence. Finally, ensuring that AI receives adequate, broad, and comprehensive data when it "learns" helps ensure that its automated decisions and conclusions avoid bias.
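Good data hygiene of the kind described above can start with pseudonymizing direct identifiers before records ever reach an AI pipeline, for example with a keyed hash so the raw values are never exposed downstream. The sketch below is a minimal illustration, with hypothetical function names; note that pseudonymization is weaker than true anonymization – quasi-identifiers left in the record (ZIP code, birth date, and so on) can still allow the kind of re-identification discussed earlier:

```python
import hashlib
import hmac


def pseudonymize(value: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always maps to the same pseudonym, so records can
    still be joined per-consumer, but the raw identifier never reaches
    the AI pipeline. The key must be stored separately from the data.
    """
    return hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256).hexdigest()


def scrub_record(record: dict, pii_fields: set, secret_key: bytes) -> dict:
    """Return a copy of the record with designated PII fields pseudonymized."""
    return {
        k: pseudonymize(v, secret_key) if k in pii_fields else v
        for k, v in record.items()
    }


# Usage: scrub the email before the record is used for model training.
key = b"rotate-me-and-store-separately"
raw = {"email": "jane@example.com", "zip": "94107"}
clean = scrub_record(raw, {"email"}, key)
```

Because the mapping is deterministic under a given key, rotating the key also severs the link between old and new pseudonyms, which can be a useful control when consent is withdrawn.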

It looks like we’re at a tipping point with AI – it’s not a question of if, but when, your company will implement this exciting and powerful technology in its products and services. In doing so, organizations should consider the potential privacy pitfalls of the technology and take action from the outset to address, among other things, the associated privacy implications.