AI Risks & Rewards: 2023 Global Privacy Summit Key Takeaways


Earlier this year, while attending the International Association of Privacy Professionals (IAPP) Global Privacy Summit, I had a strange feeling of being suspended at an inflection point in history. Perhaps it’s because I’m a new father and my daughter is almost exactly as old as ChatGPT’s open beta, but I really think we have entered a new era, just as we once entered the era of the Internet or of electricity. We’ve entered the era of generative Artificial Intelligence: AI that can generate new content based on a variety of inputs.

While I attended conference sessions on AI, I was simultaneously reading news of the latest technological advances in AI, giving me the strange sensation that the events at the conference and the events in AI were happening in response to each other in real time. Ever since a friend told me about ChatGPT a day or two after its release last November, I’ve been wrestling with the good and the bad, the risks and the opportunities, the excitement and the fear of ChatGPT and LLMs. I’m trying to remain grounded amid what seem to be very rapidly changing circumstances.

Rapidly unlocking doors

I have used generative AI tools extensively in order to understand and appreciate them: creating silly art and text, posing tough logic tests and riddles, setting up conversations with long-dead philosophers based upon their writings, learning to program in Python, and building apps on my PC and iPhone. All these doors being unlocked is immensely empowering and exciting, but I sometimes come away from these experiences exhausted and haunted, both by the extremely dangerous misuses I know are possible and by the sense that we have entered a phase of technological development where, on a large scale, we are not sure what capabilities we have unleashed or even how they were possible. For example, how is it that modeling word relationships probabilistically, at massive scale, produced a language model capable of logically deducing facts and translating between every known language? I still haven’t found an answer to that question.

As AI technology continues to advance at a rapid pace, concerns about its implications have risen to the forefront of discussions among policymakers, technologists, and the public. OpenAI’s ChatGPT, for example, has been met with regulatory scrutiny across the globe, prompting investigations into its potential impact on various aspects of society. During the IAPP Global Privacy Summit 2023, the Office of the Privacy Commissioner of Canada (OPC) announced an investigation into ChatGPT[1], highlighting the need for regulation and oversight in this rapidly evolving field.[2] More recently, the OPC has begun funding research on the impacts of AI.[3]

The emergence of multi-modal AI models that can operate with text, audio, and video adds another layer of complexity to the conversation. These models, capable of understanding both text and images, exhibit human-level performance on various professional and academic benchmarks. The astonishing capabilities of AI have given rise to both awe and apprehension, as researchers and businesses harness the technology for applications ranging from language processing to image recognition.

Microsoft has contributed JARVIS to GitHub[4], a multi-modal tool that allows ChatGPT to act as a controller for complex tasks, calling upon a multitude of “expert” models that specialize in video, audio, physical, and other skills. The repository quickly grew in popularity, and hundreds of forks emerged as people realized its potential. All sorts of complex use cases become possible with this kind of AI collaboration now that ChatGPT provides the glue to translate between models.

Here are some amazing examples of what becomes possible by integrating AIs across multiple interface modes with a tool such as JARVIS:

  • Autonomous robotics control
  • Visual sign language interpretation
  • Entire films complete with audio, based upon sketch inspiration
  • Architectural analysis and design based upon blueprints, video, photographs, and text
  • Health care diagnosis assistance through analysis of charts, notes, and images
  • Incredibly engaging non-player character experiences in games
  • Generation of VR environments for training purposes in response to live feedback
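The controller-and-experts pattern behind use cases like these can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: the expert functions are stubs, and the keyword planner merely imitates the planning step that, in JARVIS, the LLM itself performs from a natural-language request. The sketch only illustrates the dispatch structure, not any real API.

```python
# A minimal sketch of the "LLM as controller" pattern: a central planner
# decomposes a request and delegates sub-tasks to specialist "expert" models.
from typing import Callable, Dict, List

# Stub "expert" models, one per modality (stand-ins for real models).
def caption_image(task: str) -> str:
    return f"[image expert] caption for: {task}"

def transcribe_audio(task: str) -> str:
    return f"[audio expert] transcript for: {task}"

def summarize_text(task: str) -> str:
    return f"[text expert] summary of: {task}"

EXPERTS: Dict[str, Callable[[str], str]] = {
    "image": caption_image,
    "audio": transcribe_audio,
    "text": summarize_text,
}

def plan_task(request: str) -> List[str]:
    """Toy planner: choose experts by keyword, de-duplicated in order.
    In JARVIS, the LLM produces this plan from the user's request."""
    keywords = {"photo": "image", "picture": "image",
                "recording": "audio", "audio": "audio"}
    plan = [expert for kw, expert in keywords.items() if kw in request.lower()]
    return list(dict.fromkeys(plan)) or ["text"]

def run_controller(request: str) -> List[str]:
    """Dispatch each planned sub-task to its expert and collect the results."""
    return [EXPERTS[modality](request) for modality in plan_task(request)]
```

For example, `run_controller("Describe this photo and the attached audio recording")` would route the request to the image expert and then the audio expert; the controller’s role is simply planning plus dispatch, with the hard work done by the specialists.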

Here are some things from the realm of science fiction that might now be within reach:

  • Brain-computer interfaces (this has already happened, to some extent!)
  • Mind-reading (because brainwaves are just another language to be interpreted, and can be captured through multiple modes of sensory experience)
  • Biological genetic design (because AI could analyze genomes as well as their relationship to health conditions and physical characteristics)
  • Vast advances in theoretical physics (by understanding charts and equations as another form of language)
  • Robots that can use language, logic, and multiple senses to act autonomously in the physical world
  • Robotic limbs and sensory organs to bring sight to the blind and mobility to the paralyzed

Cause for celebration & concern

While AI’s potential to automate and reason in an almost human fashion has been celebrated by some, it has also raised concerns about job displacement, particularly among people who create audio, visual, and written content. Goldman Sachs estimates that 300 million jobs may be automated in some way by the latest wave of AI.[5] During the opening general session, generative AI expert Nina Schick said she believes that by 2025, 90% of all content will be AI-generated. The ethical considerations of AI deployment, particularly with respect to privacy, bias, and transparency, have also come into focus.

Amidst reports of Microsoft laying off its AI ethics group[6] and of other groups conducting AI experiments some describe as dangerous,[7] governments and regulatory bodies have taken steps to address the challenges posed by AI. Guidelines and proposals for AI regulation have been announced by entities such as the U.S. Federal Trade Commission[8] and the European Commission. The European Commission, for example, has drafted an Artificial Intelligence Act that divides uses of AI into risk categories to protect citizens’ rights.[9]

In my conversations with privacy professionals at the conference, it seemed increasingly urgent for stakeholders to engage in an open and constructive dialogue about the technology’s potential and pitfalls. By understanding AI’s capabilities, as well as its limitations, we can work collaboratively to shape a future that is both innovative and inclusive. It was clear from several sessions that this was not merely a privacy issue, or an ethics issue, or an intellectual property, security, or social issue, but a combination of many areas of concern.

Making AI equitable for all

For all of AI’s incredible capabilities, the things that make us valuable to ourselves and to each other as humans do not change. We cannot look to AI to provide those kinds of values for us; rather, we must imbue our creations with our ethics and humanity. That is exactly why now, more than ever, we need the input of ethicists, lawyers, privacy professionals, security professionals, and others to oversee these creations.

Multidisciplinary AI professionals need to build transparency, fairness, accountability, human supervision, and privacy into all of these AI creations. It’s our responsibility to use these principles to fight the bias, inaccuracy, and risks to safety and privacy that concern society and humanity.

As AI continues to transform various industries and aspects of daily life, it’s essential for us to remain vigilant and proactive in ensuring that the development and deployment of AI align with ethical principles and societal values. Through thoughtful regulation, collaboration, and public discourse, we can harness the power of AI to create a happier, more equitable future for all.


[1] Office of the Privacy Commissioner of Canada. (2023). Announcement: OPC launches investigation into ChatGPT. https://www.priv.gc.ca/en/opc-news/news-and-announcements/2023/an_230404/

[2] International Association of Privacy Professionals. (2023). IAPP GPS 2023: FTC’s Bedoya sheds light on generative AI regulation. https://iapp.org/news/a/iapp-gps-2023-ftcs-bedoya-sheds-light-on-generative-ai-regulation

[3] Office of the Privacy Commissioner of Canada. (2023). https://www.priv.gc.ca/en/opc-news/news-and-announcements/2023/an_231023/

[4] Microsoft. JARVIS (GitHub repository). https://github.com/microsoft/JARVIS

[5] CNN. (2023). https://www.cnn.com/2023/03/29/tech/chatgpt-ai-automation-jobs-impact-intl-hnk/index.html

[6] The Verge. (2023). Microsoft lays off AI ethics and society team. https://www.theverge.com/2023/3/13/23638823/microsoft-ethics-society-team-responsible-ai-layoffs

[7] Nasdaq. Are Fast-Paced AI Developments Dangerous? AI Experts Think So. https://www.nasdaq.com/articles/are-fast-paced-ai-developments-dangerous-ai-experts-think-so

[8] Foley & Lardner LLP. (2023). https://www.foley.com/en/insights/publications/2023/03/ftc-issues-guidance-ai-powered-products

[9] European Parliament. (2023). https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence