What AI Regulation Means for Silicon Valley

Julia Reinhardt, Privacy and AI Governance Professional, on data policy, GDPR, and its impact on SMEs

Diksha Dutta
Ocean Protocol

--

In the latest episode of Voices of the Data Economy, we spoke to Julia Reinhardt, a San Francisco-based expert in Artificial Intelligence governance and a privacy and public policy consultant. As a Mozilla Fellow in Residence, Julia assesses the opportunities and limitations of European approaches to trustworthy Artificial Intelligence in Silicon Valley and their potential for U.S. businesses and advocacy. During our conversation, Julia spoke about the different facets of GDPR's impact on Silicon Valley and the challenges of upcoming AI regulation. Here are edited excerpts from the podcast.

Impact of GDPR on Silicon Valley

GDPR has had notable and immediate impacts worldwide. It has brought awareness to Silicon Valley that privacy is important to people in Europe and other world regions; it is a human right that many in the US had not considered significant. The global conversation around privacy has shifted since GDPR took effect in 2018, and so have the laws. As a direct result of GDPR, countries like Japan, Brazil, India, and China are in the process of passing GDPR-inspired privacy laws. In addition, California has a new privacy law, which went into effect in 2020 as a result of GDPR.

GDPR has also shown Silicon Valley that one of its biggest markets, Europe, has its own set of rules, and that US companies have to follow them to be players there and earn money in the region. As a result, many US-based organizations that process the personal data of people worldwide have decided to apply GDPR and extend all the rights that go with it to customers who are not European residents and live outside of Europe. It gives them an edge in global compliance and makes it easier for them to handle complaints and requests. In addition, GDPR offers them a legal framework and a set of standards.

“I need to mention that a disappointing factor with the GDPR laws is the enforcement. Even when tech companies do get hit with billion-dollar fines, for them, it’s a slap on the wrist. And so far, GDPR hasn’t changed the underlying business models, the way money is made on the internet by surveilling people’s behavior. So it’s not just the business model of a company; it is the economic model on which the entire internet is based, and that model doesn’t have privacy top of mind. Changing that requires fundamental and probably painful adjustments to the way things have been structured. That’s something that GDPR so far hasn’t been able to achieve. And that is definitely a bit disappointing.”

AI regulations in Europe and their global challenges

Julia worked as a German diplomat for almost 15 years, managing bilateral relations, navigating crisis communication, heading up high-level protocol, participating in EU negotiation processes, and promoting innovation and outreach in the Western US.

As part of her work today, she intends to make sure that the upcoming AI regulation from Europe does not again leave small players lagging behind, because in the field of AI, size matters. “The more data you can gather, the better your AI system works. We’re already pretty far down the road to monopolization because big players in the market have access to an impressive range of data. They can also afford to gather high-quality data, which then enables them to build better-performing AI. And for small-scale providers, what’s most important is the clarity of the guidance. The draft that the European Commission tabled has been a very long time in the making. It’s the most ambitious and the most comprehensive attempt at reining in the risks linked to the deployment of AI that we have seen so far across the globe. It’s a bold new step.” You can read an analysis of the AI regulation proposal here.

Now, in 2021, we’re at the stage of transforming these principles into practical rules and regulations. The rules that the European Commission proposed wouldn’t cover all AI systems; they cover only systems deemed to pose a significant risk to the safety and fundamental rights of people living in Europe. It’s a risk-based approach, and it has several layers. Those layers set different rules for different classes of AI systems: some are prohibited, some are considered high risk, some follow specific rules only, and for others the only requirement is more transparency.

Going deeper into regulations: Code testing for algorithms

You have to know in which category your AI system falls. For some uses of AI, the Commission proposes an outright ban because they pose an unacceptable threat to citizens. One example is an AI system that causes physical or psychological harm by manipulating people’s behavior or exploiting their vulnerabilities, such as age or disability. Other examples are social scoring systems, where people collect points, and facial recognition in public spaces by law enforcement authorities; not all facial recognition is banned, only its use by police in public areas, and even there exceptions apply.

Most of the regulatory draft focuses on AI that is considered high risk, and what counts as high risk is defined in the draft itself. This covers problematic uses in recruiting and the employment and admissions context, in determining a person’s creditworthiness or eligibility for public services and benefits, and some applications used in law enforcement, security, and the judiciary. These systems have to meet various requirements and undergo a conformity assessment before they can be placed on the European market.

A high-risk AI system has to comply with several requirements around risk management. It has to use data sets for training, validation, and testing that are relevant, representative, and free of errors. Documentation about a high-risk AI system must be extensive and precise: why you chose certain designs, and why you designed the system in a specific way. The keyword is always human oversight. High-risk AI systems must be designed so that people can understand the capabilities and limitations of the system, counter so-called automation bias, and, if necessary, reverse or override the output. It’s like code testing for algorithms, as the sketch below illustrates.
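To make the analogy concrete, here is a minimal, purely illustrative sketch in Python of what such pre-deployment checks could look like: automated verification that the training, validation, and testing data sets meet the relevance, representativeness, and error-freedom criteria, plus checks that design choices are documented and that a human can override the output. The structure and names (DatasetReport, conformity_check) are assumptions made for illustration; they do not come from the draft regulation or from the podcast.

```python
# Purely illustrative sketch (not from the draft regulation or the podcast):
# automated pre-deployment checks for a hypothetical high-risk AI system,
# mirroring the data-quality, documentation, and human-oversight requirements
# described above. All names (DatasetReport, conformity_check) are made up.
from dataclasses import dataclass
from typing import List


@dataclass
class DatasetReport:
    relevant: bool        # features match the system's intended purpose
    representative: bool  # data covers the population the system will be used on
    error_free: bool      # no missing labels, duplicates, or corrupt records

    def passes(self) -> bool:
        return self.relevant and self.representative and self.error_free


def conformity_check(
    train: DatasetReport,
    validation: DatasetReport,
    test: DatasetReport,
    design_choices_documented: bool,
    human_can_override_output: bool,
) -> List[str]:
    """Return a list of findings; an empty list means all checks pass."""
    findings = []
    for name, report in (("training", train), ("validation", validation), ("test", test)):
        if not report.passes():
            findings.append(f"{name} data set fails relevance/representativeness/error checks")
    if not design_choices_documented:
        findings.append("design choices are not documented")
    if not human_can_override_output:
        findings.append("no mechanism for a human to reverse or override the output")
    return findings


# Example run: a non-representative validation set and a missing override
# mechanism both show up as findings before the system reaches the market.
report = conformity_check(
    train=DatasetReport(True, True, True),
    validation=DatasetReport(True, False, True),
    test=DatasetReport(True, True, True),
    design_choices_documented=True,
    human_can_override_output=False,
)
print(report)
```

In practice, such checks would involve statistical tests, documentation review, and domain expertise rather than simple boolean flags, but the point of the analogy holds: conformity can be expressed as concrete, repeatable checks that a system must pass before deployment.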

Loopholes in AI regulation: Not the final word

The European Parliament and other bodies in Europe have already called for much stricter rules on some elements of the draft. In addition, certain member states believe it should be more stringent in some cases. In short, this is not the final word.

“In my personal opinion, I do think that the exceptions, such as in facial recognition, are too wide. It’s just difficult when you ban very specific uses of facial recognition but then, for industry or private uses, there’s no ban at all. Even for law enforcement, there are certain areas where it can be used. Practically speaking, law enforcement in Europe will buy facial recognition systems on the market, wherever they’re produced, and use them in those specific cases where they are allowed to. How do you really want to make sure that they don’t use them for other things? I think that’s a huge loophole. I do think that facial recognition has the potential to actually undermine our free society. In the end, there’s a lot to criticize about this draft.”

Here is a list of selected timestamps for the different topics discussed during the podcast:

  • 2:16 — 6:56 Julia’s journey from being a German diplomat to an advisor on data policy and regulations in the US
  • 6:56 — 12:48 GDPR’s impact on Silicon Valley
  • 15:55 — 20:38 Impact of GDPR on Big Tech and SMEs in the U.S.
  • 20:38 — 25:53 How do the proposed AI regulations impact the U.S.?
  • 25:53 — 29:58 Detailed analysis of the AI regulations
  • 29:58 — 35:55 Loopholes and challenges of the AI regulation
  • 35:55 — End Do innovation and AI regulation go hand in hand?

Follow Ocean Protocol on Twitter, Telegram, LinkedIn, Reddit, GitHub & Newsletter for project updates and announcements. And chat directly with other developers on Discord.
