
Steven De Schrijver


Astrea

Brussels www.astrealaw.be


sds@astrealaw.be Tel: +32 476 609 182

Biography

Steven De Schrijver is a partner with the Belgian law firm Astrea. He has almost 30 years of expertise in a wide range of IT and technology law matters. Steven has been involved in many outsourcing, digital transformation, video platform and data protection (now GDPR) compliance projects. He has a passion for artificial intelligence, robotics and drones. He is also considered to be one of Belgium’s top tech M&A lawyers.

What inspired you to pursue a career in law?

I always had a passion for writing. When I was 18, becoming a literary author seemed a less evident path than becoming a lawyer.

What qualities make for a successful corporate lawyer?

Knowing the law, willingness to go the extra mile, keenness to learn, attention to detail and commercial awareness.

How has your work in data affected your approach to your corporate and media practice?

Apart from being an interesting field of law in itself to explore and advise on, data law is an important addition to my corporate and media practice. It makes one much more aware of how privacy and data protection law work across all other fields of law, including corporate law (e.g., when conducting due diligence within the framework of an M&A transaction) and media law (e.g., when advising on advertising rules for new media platforms or telecommunications providers).

How can clients best continue to innovate given the increasing complexity of data regulations in the US, EU, China and Brazil?

The technology industry will of course always be ahead of regulation when introducing very innovative products. Certain general rules will always remain applicable, but very specific legal questions that may arise in respect of the technology concerned may remain unanswered for a certain time. Clients should keep themselves aware of proposed or upcoming regulation so they can amend their processes and products where required. While the final set of rules may differ from the initial proposals, it is always good to steer a business in the direction in which the law is heading. For instance, while the draft AI regulation is still likely to change, developers of AI products can already prepare themselves for certain certification procedures or implement measures to comply with the future obligations for high-risk AI systems. Taking these steps now, perhaps in the planning phase of a new project, may save costs, as redesigning a process entirely from the very beginning can be far more expensive. This is also where technology lawyers who closely follow legal developments will be important to innovators: it is good to advise a client that their product is not yet regulated, but it is even better if you can tell your client (where possible) what you expect the law to be soon and what steps the client can take to achieve compliance and perhaps gain a better position in the market than their competitors.

What are the biggest data threats your clients currently face, and how are you helping tackle them?

Malware, ransomware and phishing remain the global security threats that clients currently face and will have to continue to tackle. These have many implications for clients, ranging from preventive measures to reactive actions. Lawyers can assist in preparing internal procedures regarding security measures, data breach notification and data processing. With the future entry into force of the NIS 2 Directive, which will have a very broad scope of application across many sectors, more and more businesses will have to take important steps to ensure a high level of cybersecurity in their operations. Clients may also consider taking out a cyber insurance contract. If things go wrong, clients must be assisted in assessing whether a data breach must be notified to the competent authority, in any negotiations with cybercriminals, and in possible steps towards law enforcement.

What does the EU’s Artificial Intelligence Regulation draft do well, and what are its limitations?

The benefits of AI should not be overshadowed by its potential risks, which could corrode European values and human rights. The EU therefore intends to regulate AI in a balanced way, so as not to limit innovation more than necessary: it bans certain potentially harmful AI solutions, such as social credit scoring systems; it subjects high-risk categories (those that could adversely affect people's safety or fundamental rights) to mandatory requirements and obligations; and it encourages self-imposed codes of conduct for the rest. It is good that this legal text will subject this innovative technology to certain important principles. The scope of these categories is, however, subject to criticism, since the differences between these demarcations are quite large.

An important topic is the interpretation of transparency. This overarching principle encompasses the need for subjects to know when they are confronted with AI, its modus operandi and goals (transparency sensu stricto), a clear explanation of the algorithm so that one can understand how a particular decision was made (explainability) and the possibility to sufficiently understand and use the output generated by AI (interpretability). Although each term has a different connotation, the overarching term of transparency is used throughout the text without further clarification and sometimes contradictorily.

The lack of clear liability rules in the AI regulation is a further point of criticism, especially when a product is based on algorithms from different developers. After all, the characteristic features of AI systems, such as autonomous learning and the unknown quality of data sets, challenge current schemes for allocating liability. Perhaps a solution will be found through an amendment of the Product Liability Directive, which is currently under review as it may not be entirely suited to the challenges posed by emerging technologies such as AI. For example, it is unclear whether unpredictable outcomes of an AI algorithm that lead to damage can be treated as "defects" under the Directive. Even if they can be, the "development risk defence" may exempt developers from liability for defects that were undiscoverable when the AI product was put into circulation, which would hinder injured parties in obtaining compensation. The current legal liability landscape needs to be revised in order to keep up with the pace at which AI technologies emerge.

It is clear that a lot of work has still to be done in respect of the proposed AI regulation. Is the definition of AI adequate or too broad? What is the regulation’s relation to the GDPR? And what about individual rights for persons that are made subject to an AI system?

AdTech is a sector coming under increasing scrutiny from regulators. How do you see this developing?

The introduction of new privacy laws such as the GDPR has already brought important changes to the AdTech industry. The future introduction of the ePrivacy Regulation will certainly further impact the industry's use of cookies. Third-party cookies will be phased out to a large extent in the coming years due to decisions by Google and other browser providers. A recent decision by the Belgian DPA to fine IAB Europe, which facilitates the management of users' preferences for online personalised advertising, for GDPR violations will further impact the industry. The sector will have to look at alternatives. AI may be an interesting solution, as may IoT with connected TVs and AR. Even the metaverse may be interesting for the AdTech sector. In any case, the industry will have to move to a privacy-centric and consumer-focused approach, whereby data is collected directly from the consumer in accordance with all privacy laws. Another novelty is anonymous identifiers, which, with the user's consent, can be tied to the user's identity and then used for advertising. A consent-based approach may indeed prove the best way to be compliant with new regulations in multiple jurisdictions.

Looking back over your career, what has been your proudest achievement to date?

Being recognised by my peers as a thought leader in my field is certainly one of the proudest achievements of my career. Beyond that, we as lawyers are high-quality service providers, and delivering the best possible service regardless of the size or importance of the assignment remains the greatest achievement a lawyer can accomplish each day.

Peers and clients say: "An extremely savvy lawyer, with a special knack for Tech M&A transactions"; "He is an outstanding lawyer"; "A very experienced tech lawyer who I would very quickly recommend to clients".
