The data challenge
According to our survey respondents, Margrethe Vestager had the most impact on European tech this year, topping the poll with 12% of respondents' votes. Notably, 40% of those who picked the EU competition commissioner were founders or employees of tech startups, and 10% were part of the VC community. As such, our aim in this article is to identify opportunities for improved collaboration between policymakers and the European tech community. To do this, we used Politico Pro Intelligence to analyse the activity of the European Parliament (the legislative branch of the European Union) during its last term (2014 to 2019) to better understand the policy conversation around a number of key topics for the tech community. The European Parliament, as the body that debates and approves legislation, is where we can see the 'end result' of the European policy agenda set by the European Commission. In other words, it's a useful proxy for actual policy outcomes that may have a nearer-term impact on the European tech ecosystem. Were it possible, we would extend the analysis to cover the European Commission, which creates and proposes the forward-looking policy agenda for the European Union, to complement this analysis with a longer-term view on future policy focus.
As a business, we think globally – we don't think locally. We were international from the beginning because our users demand it. The youth of today are inherently global, and they seek fashion inspiration from all over the world. This has forced us to look at how our business can be more appealing to other markets from day one. We also believe that privacy is going to be a huge topic globally, and as our laws are more stringent in Europe – as a result of GDPR – we are in a better position to adapt to our users' expectations as we expand globally.
European tech continues to prosper, and it's very likely the next giant will have started here. There are two crucial elements to this growth. Critically, later-stage VC money is no longer confined to Silicon Valley, removing the pressure for tech firms to relocate in order to scale. That has had a massive impact on the likes of TransferWise, Monzo and N26, which have been able to grow, hire and innovate so much faster. For fintechs, European regulators have fostered an environment in which challengers can prosper on a level playing field with incumbents, e.g. the Bank of England opening up settlement accounts to non-banks. This forward-thinking approach is setting the standard for regulators all over the world, and is a huge advantage for the European market. A year ago, Brexit was my big concern in continuing this momentum. Assuming we lose the regulatory passporting rights the EU provides, how would the current high-growth firms handle the need to get regulated in multiple countries? Would TransferWise and others be able to continue to hire the talent we need to scale? Today it's clear the current crop of scale-ups will meet this challenge. Most, like TransferWise, have already taken the steps needed to mitigate all possible Brexit outcomes. My next concern is how we help the next generation of startups also thrive, so that London in particular continues to be attractive as an HQ from which to grow a business.
Owkin is playing an important role in increasing collaboration between academic, biopharma and healthcare institutions by championing a new class of technology called Federated Learning. Federated Learning allows researchers to collaborate and train predictive models on the decentralised data held within disparate institutions, to reveal insights on mechanisms of action or drivers of disease progression, while entirely safeguarding patient privacy by sending the models to the data and never removing data from behind hospital firewalls.
More could be done by governments to support collaboration. In America, for example, hospitals have a standard contract form called a business associate agreement (BAA), which standardises how third parties access anonymised patient data. It is a rigorous but standard process. European information governance rules are strong on privacy protection, but there are few contractual standards in place, which makes it expensive and time-consuming to form a bespoke partnership with every single institution. However, I am confident that these issues are being worked on, and that Europe is moving in the right direction for both attracting and retaining great health tech talent.
The healthcare platform that connects with patients around the world.
The disruptive consumer electronics innovator that makes and sells a new kind of device.
The delivery app that relies on gig workers.
The AI pioneer that utilises facial recognition technology.
What do they all have in common? They are exciting tech company models that also present growing exposure to human rights concerns.
Human rights may not be the first topic you associate with the state of the European tech ecosystem. But the reputational, financial and legal hazards once associated primarily with the mistreatment of physical labourers have moved into the digital world. The rise of AI, big data, gig workers, facial recognition technology and 5G has given rise to a new set of human rights issues. And governments are turning their attention to the human rights impact of the adoption and use of technology.
A valuable conversation is emerging about 'responsible technology' — preventing, addressing and remediating the negative impacts of technology on human rights, and ensuring its ethical design, deployment and use. And Europe is in the driver's seat on 'responsible AI'. In April 2019, the European Commission's High-Level Expert Group on AI presented the 'Ethics Guidelines for Trustworthy Artificial Intelligence', which are underpinned by international human rights law and identify seven key requirements for AI systems to be deemed trustworthy.
The Council of Europe Commissioner for Human Rights also published a 10-point recommendation on AI and human rights this year. Among other things, the report recommends that member states establish procedures for conducting human rights impact assessments.
The OECD also published its own Principles on AI this year, calling for AI systems to be designed in a way that respects human rights.
At the same time, more tech companies worldwide are opting in to the UN Guiding Principles on Business and Human Rights and joining multi-stakeholder initiatives aimed at promoting human rights and ethics in tech, like the Global Network Initiative, Partnership on AI and the World Economic Forum's ethical tech projects.
There's a lot to unpack here – and different schemes need to be reconciled. But if you're at a company looking to get started on these issues – as both a moral imperative and a business and risk management matter – we suggest three key steps: (1) a human rights impact assessment, which will help you understand and prioritise areas of risk for your particular business model; (2) integration of human rights considerations into your existing compliance processes and policies; and (3) engagement with key stakeholders, including your board.
This is a growing area of focus for companies, investors and consumers, as this year's SOET report confirms.
Be at the forefront.