Embracing the digital era: revolutionising the construction industry with people at the core
THE CONSTRUCTION INDUSTRY IS UNDERGOING A REMARKABLE DIGITAL TRANSFORMATION, WITH PEOPLE PLAYING A PIVOTAL ROLE IN ITS SUCCESS.
This paradigm shift requires a focus on technology, processes and most importantly, the individuals within the industry. While the pace of this transformation may appear gradual, undeniable progress can be seen through the integration of Building Information Modelling (BIM), digital platforms and the rise of industrialised construction methods. These technological advances have been extensively tested and are readily available for implementation in the construction sector.
However, the true challenge lies in guiding organisations through the digital transformation journey and effectively managing the associated change. Recognising that each organisation is unique, tailored approaches are essential to ensure a streamlined and efficient transition. To embark on a successful digital transformation journey, consider incorporating the following key elements into your road map:
1. Gain an in-depth understanding of the current state of your processes: before initiating any transformation, it is crucial to assess the existing processes fully, identifying strengths, weaknesses and areas ripe for improvement.
2. Develop a digital solutions portfolio to achieve the desired future state: design a comprehensive set of digital solutions that align with your organisation’s goals and vision. This portfolio should encompass various technological tools and platforms that enhance productivity, collaboration and efficiency throughout the construction lifecycle.
3. Evaluate the organisation’s level of digital maturity and conduct a cost-benefit analysis: assess the organisation’s readiness to embrace digital transformation by evaluating its current digital maturity. This evaluation should include an analysis of potential costs and benefits associated with the transformation, ensuring a well-informed decision-making process.
4. Implement and manage change: execute the digital transformation plan, ensuring that effective change management strategies are in place. Encourage employee engagement, provide extensive training and facilitate open communication channels to support a smooth transition. Sustaining the transformation requires ongoing monitoring, evaluation and adjustment to address emerging challenges and opportunities.
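As a toy illustration of the cost-benefit analysis in step 3, a simple net-present-value check can frame the decision. Every figure below (investment, savings, discount rate) is a hypothetical assumption, not a benchmark:

```python
# Toy NPV check for a digital transformation investment.
# All figures (investment, savings, discount rate) are assumptions.

def npv(rate, cashflows):
    """Net present value; cashflows[0] falls in year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

investment = -500_000      # year-0 outlay for BIM/platform rollout (assumed)
annual_benefit = 180_000   # yearly productivity gain (assumed)
years = 5
rate = 0.08                # discount rate (assumed)

result = npv(rate, [investment] + [annual_benefit] * years)
print(f"NPV over {years} years: {result:,.0f}")
```

A positive NPV supports proceeding; a real analysis would also cost in training, change management and ongoing maintenance.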
It is important to note that change management is often one of the most demanding aspects of any transformation project. Shifting the mindset of individuals to embrace new ways of doing business involves a cognitive journey, starting from acknowledging initial incompetence to ultimately achieving unconscious competence. Nurturing and supporting employees throughout this transformative process is key to unlocking the full potential of digitalisation within the construction industry.
By embracing the digital era and recognising the critical role of people, the construction industry can revolutionise itself, paving the way for increased productivity, enhanced collaboration and sustainable growth in the ever-evolving marketplace.
Unleashing Innovation: How Start-ups are Revolutionising the Construction Industry
IN RECENT YEARS, THE CONSTRUCTION INDUSTRY HAS WITNESSED A SURGE IN CON-TECH START-UPS THAT ARE HARNESSING CUTTING-EDGE TECHNOLOGIES TO TRANSFORM THE WAY PROJECTS ARE EXECUTED.
This wave of innovation has ushered in new perspectives and fresh thinking, driven by the convergence of technology enthusiasts and the construction sector.
Technology is now being integrated across various stages of construction, revolutionising traditional practices. Our extensive market research reveals the emergence of five prominent types of con-tech start-ups that are leveraging these technologies to reshape the industry.
1. Design configurators: these start-ups are at the forefront of generative design, using algorithms to generate multiple design options based on user requirements and constraints. This enables architects and engineers to explore innovative and optimised solutions.
2. Artificial intelligence: artificial intelligence (AI) has permeated every facet of construction. From computer vision progress monitoring to big data analysis extracted from construction sites, AI is streamlining operations and enhancing decision-making processes.
3. Drones and UAVs: start-ups are leveraging the capabilities of drones to revolutionise construction practices. Drones provide accurate volumetric scans of site works, perform automated safety and quality checks, and enable efficient data collection and analysis.
4. Virtual and augmented reality: although their adoption may be progressing at a slightly slower pace, virtual and augmented reality (VR/AR) technologies are making their way into construction. These technologies are employed for progress monitoring, clash detection and immersive visualisation, enhancing communication and collaboration on construction sites.
5. Automation and robotics: robotics is increasingly becoming an integral part of the construction industry. From industrialised construction processes to on-site monitoring, robotics is paving the way for enhanced productivity, precision and safety.
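To make the design-configurator idea above concrete, here is a deliberately simplified sketch: it enumerates candidate floor-plate dimensions under user constraints and ranks them by a crude objective (minimising facade length for a required area). The constraints and objective are illustrative assumptions, not any start-up’s actual algorithm:

```python
# Brute-force "design configurator" sketch: enumerate floor-plate
# dimensions under user constraints and rank by a crude objective
# (minimum facade length for a required area). Illustrative only.

required_area = 600        # m2, user requirement (assumed)
max_aspect_ratio = 2.0     # width:depth constraint (assumed)

candidates = []
for width in range(10, 61):              # candidate widths in metres
    depth = required_area / width
    if max(width, depth) / min(width, depth) <= max_aspect_ratio:
        perimeter = 2 * (width + depth)  # proxy for facade cost
        candidates.append((perimeter, width, round(depth, 1)))

best = min(candidates)
print(f"Best option: {best[1]} m x {best[2]} m, facade {best[0]:.1f} m")
```

Real generative design engines explore far richer parameter spaces with optimisation algorithms, but the principle is the same: generate many options, filter by constraints, rank by objectives.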
Start-ups are also placing a strong emphasis on sustainability. They use advanced technologies to enable sustainable practices through data monitoring, concrete technology advancements and CO2 capturing techniques. By leveraging real-time data and intelligent systems, construction is becoming more efficient and environmentally conscious.
The rise of con-tech start-ups is driving a paradigm shift in the construction industry, propelling it into a new era of innovation and efficiency. By embracing these technological advancements and sustainable practices, the industry is poised to meet the demands of the future while minimising its environmental impact.
Through collaboration and a willingness to embrace change, the construction industry can harness the transformative power of start-ups and create a sustainable and progressive built environment for generations to come.
Driving Decarbonisation in Construction: Harnessing the Power of Gamification
BUILDINGS CONTRIBUTE 39% OF ENERGY-RELATED GREENHOUSE GAS EMISSIONS.1
With the world’s population projected to approach 10 billion, the global building stock is expected to double in size, amplifying the need to address emissions from the construction sector.
Within the construction industry, achieving “net zero” ambitions is paramount for countries worldwide. While operational carbon accounts for 28% of those energy-related emissions, the remaining 11% arises from embodied carbon, which encompasses the energy used in constructing buildings and producing building materials. As buildings become more energy-efficient in use, the proportion of emissions from embodied carbon becomes increasingly significant.
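A quick worked example shows why the embodied share grows as buildings become more efficient in use. Starting from the 28% operational / 11% embodied split cited above, and assuming purely for illustration that operational emissions are halved while embodied emissions stay flat:

```python
# Why embodied carbon's share grows: halve operational emissions while
# embodied emissions stay flat (the halving is purely illustrative).
operational, embodied = 28.0, 11.0   # % of energy-related GHG emissions

share_now = embodied / (operational + embodied)          # 11/39
share_future = embodied / (operational / 2 + embodied)   # 11/25
print(f"Embodied share of building emissions: {share_now:.0%} -> {share_future:.0%}")
```

Embodied carbon would jump from roughly a quarter to nearly half of a building’s footprint without any change in how the building is constructed.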
To meet government targets and achieve sustainable goals, it is crucial that new buildings, infrastructure projects and renovations attain net zero embodied carbon by 2050. Additionally, all buildings, including existing ones, must strive for net zero operational carbon. One promising avenue to reduce carbon emissions effectively is the adoption of digital twins.
Digital twins leverage real-time data to simulate information from the physical world. By monitoring key environmental factors like indoor air quality, HVAC systems and electricity consumption, digital twins provide valuable insights for reducing energy consumption and emissions. The integration of gamification principles takes this a step further, transforming the collected data into an interactive and user-friendly digital twin environment.
Through gamification, individuals and teams are engaged and motivated to participate actively in sustainable practices. The interactive nature of the digital twin creates an immersive experience, enabling users to visualise the impact of their decisions and make informed choices to reduce carbon footprints. This innovative approach fosters collaboration, empowers stakeholders and drives a collective commitment towards decarbonisation in the construction industry.
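As a sketch of how such gamification might work, the snippet below turns digital-twin telemetry (electricity, HVAC runtime, CO2) into a simple team score. The sensor names, baselines and weights are all invented for illustration:

```python
# Hypothetical gamified score from digital-twin telemetry.
# Sensor names, baselines and weights are invented for illustration.

def sustainability_score(readings, baselines, weights):
    """0-100 score: full points at or below baseline, scaled down
    linearly as a reading exceeds it."""
    score = 0.0
    for key, weight in weights.items():
        ratio = min(baselines[key] / max(readings[key], 1e-9), 1.0)
        score += weight * ratio
    return round(score, 1)

baselines = {"electricity_kwh": 1200, "hvac_runtime_h": 10, "co2_ppm": 800}
weights   = {"electricity_kwh": 50,   "hvac_runtime_h": 30, "co2_ppm": 20}
today     = {"electricity_kwh": 1500, "hvac_runtime_h": 9,  "co2_ppm": 900}

print("Team score today:", sustainability_score(today, baselines, weights))
```

Surfacing a single score like this in a leaderboard is one simple way to make the impact of day-to-day decisions visible to occupants and teams.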
By harnessing the power of gamification and leveraging the capabilities of digital twins, the construction sector can pave the way for meaningful and measurable reductions in carbon emissions. This transformative approach not only aligns with global sustainability goals but also creates a more resilient and environmentally responsible built environment for future generations.
1 Bringing embodied carbon upfront – World Green Building Council.
Cybercriminals can target both the private and public sectors1 when exploiting vulnerabilities related to well-being, including healthcare. This broad umbrella term covers pharmaceuticals, research and many other aspects of medical care; pharmaceuticals are a vital part of healthcare but represent just one piece of the puzzle.
In the private sector, cybercriminals can target companies that offer health-related products or services, such as pharmaceutical companies or health insurance providers. They may seek to steal confidential patient information, trade secrets, or financial data. They may also launch attacks that disrupt business operations, causing harm to patients or customers.
In the public sector, cybercriminals target government agencies responsible for public health and safety, such as hospitals, public health departments, or emergency response systems. They may seek to disrupt services, steal sensitive information, or spread misinformation to cause panic and harm to the public. In 2020, the United States (US) Cybersecurity and Infrastructure Security Agency (CISA) reported increased cyberattacks targeting federal agencies, including a high-profile attack on the SolarWinds software supply chain that affected multiple government agencies.2
All of this means that risk-based strategies and best practices are more critical than ever. A cybersecurity breach or cybercrime can inflict mayhem on any company, compromising assets, exposing sensitive data and, by damaging life-saving systems, even resulting in loss of life.
Indeed, a cybercrime could cause severe harm or even death by compromising critical infrastructure that supports life, or the systems that sustain the human body, such as the respiratory, cardiovascular and nervous systems. The consequences of such events can be severe, potentially resulting in loss of life, widespread destruction, and long-lasting economic and social impacts. It is therefore important to respond appropriately to prevent or mitigate the effects of such events whenever possible.
A breach or cybercrime also requires time, resources, and effort to contain, eradicate and mitigate the damage. The healthcare and pharmaceutical sectors represent one area where attacks are rising.
Many industries, such as construction3 and maritime4 businesses, are at heightened risk from cyberattacks and cybercrimes; they are targets of choice for cybercriminals. The healthcare and pharmaceutical industry is one of the most noteworthy industries under attack as it fundamentally ensures our well-being.5 It faces an unprecedented challenge as cybercriminals continue to target it, putting sensitive patient data and research at risk.
During 2022 and 2023, several healthcare providers, including Sharp HealthCare, Choice Health Insurance, Shields Health Care Group, and Alameda Health System, experienced data breaches with patients’ personal information, such as social security numbers, health insurance data, and health records, being compromised. Moreover, the Red Cross and Red Crescent societies across the globe also suffered a complex cyberattack that resulted in the seizure of data belonging to more than 515,000 vulnerable people.6
According to Cybersecurity Ventures, cybercrime damage is set to reach $10 trillion by 2025, making it one of the biggest threats to global businesses.7 The World Economic Forum’s Global Risks Report 2023 also highlights cyberattacks as a significant risk to the global economy, with healthcare and pharmaceutical companies among the most vulnerable.8
A key concern for this industry is the supply chain risk; cyberattacks on third-party vendors and suppliers can potentially compromise the entire chain. In a TechTarget Pharma News Intelligence report, the cybersecurity risk to the pharmaceutical supply chain is estimated to be over $31 million annually.9 It is a significant concern for the industry as supply chain disruptions could have severe consequences for patients relying on life-saving medications.
Historical breaches show that the healthcare and pharmaceutical industry has long been at risk of cyberattacks, as many cybercriminals aim to steal its sensitive and confidential data, including, but not limited to, prescriptions, research and patient information. Because of this valuable data, these companies have become a target of choice for cybercriminals. One common attack method is phishing: malicious emails containing a link and a message designed to trick victims into clicking, often the precursor to a ransomware attack. This happened to the Texas-based OakBend Medical Center, which suffered a ransomware attack in September 2022, forcing the hospital’s IT department to take all systems offline and put them in lockdown mode.10 More recently, over 94,000 Florida Medical Clinic patients were notified that a ransomware attack deployed against the provider’s network on 9 January 2023 enabled the attacker to access specific files containing their health information.11
Just like in most industries, the risk of such attacks is massive for the healthcare and pharmaceutical industry. Ransomware attacks aim to extort ransom from victims using malicious software. If the ransom is not paid, cybercriminals can threaten to publish the data on the internet, and they often follow through. Such an attack can cause companies considerable problems, including computer downtime, financial loss and reputational damage, as evidenced by the phishing email cyberattack that crippled the Irish Health Service Executive (HSE) in 2021.12 In 2022, a cyberattack on a major IT provider for the UK National Health Service (NHS) was also confirmed as a ransomware attack.13
Hospitals have seen variations of this type of attack. Health systems should review their cyber defences concerning webpages in response to threats from the pro-Russia hacktivist group known as Killnet, which uses Distributed Denial of Service (DDoS)14 attacks to take down forward-facing webpages and breach protected health information (PHI).15 DDoS attacks create two primary problems for healthcare providers.
First, if a DDoS attack disables a hospital’s forward-facing webpage, it could affect appointment scheduling, prescriptions and other services that patients access through the web portal; hospitals should therefore prepare to conduct these administrative tasks another way.
Second, a ransomware group may conduct a DDoS attack against a target and, while the cybersecurity team deals with the attack, deploy the ransomware. Focused on cleaning up the DDoS attack, the team does not recognise that something else is happening; the real problem arises when patient data is encrypted, stolen or leaked.
The leaking of stolen data has occurred previously. Following the cyberattack on the European Medicines Agency (EMA), the decentralised agency of the European Union responsible for reviewing and approving vaccines before distribution and for monitoring and evaluating such medicines, cybercriminals leaked Covid-19 vaccination data from Pfizer and BioNTech.16
It is imperative for pharmaceutical and healthcare providers to be aware of the threats to the industry and to have a plan and the right technology in place to identify and mitigate them.
Automation, third-party sellers, vendors, healthcare groups and new technological tools dominate the healthcare and pharmaceutical industry. They are beneficial, as they are needed to maintain vital supply chains and support research, care and development. However, if a third-party vendor is breached, the breach can damage all its partners; indeed, third-party access and security within healthcare are at significant risk.17 This was evidenced in July 2022, when Infinity Rehab notified the US Department of Health and Human Services (HHS) that 183,254 patients had had their personal data stolen. At the same time, Avamere Health Services informed the HHS that 197,730 patients had suffered a similar fate. Then, on 16 August, Washington’s MultiCare revealed that 18,165 more patients were affected in the same breach.18
Additionally, the introduction of the Internet of Things (IoT) makes the healthcare and pharmaceutical industry more vulnerable to cyberattacks and cybercrimes. Organisations use IoT to streamline and analyse critical patient data, and multifaceted procedures are becoming more effective as a result. But while IoT devices are now firmly associated with the industry19, their vulnerabilities increase the number of available attack vectors.20
Many companies associated with or within the industry have merged or been acquired by giant corporations. This can harm cybersecurity because subsidiary companies tend to underinvest in their security; thus, even well-protected parent companies can find their security compromised. Therefore, companies should proactively prioritise cybersecurity and cybercrime prevention before taking the necessary acquisition actions.
When organisations take a prevention-first approach across their business decisions, they will be better positioned to manage and monitor cyber risks proactively rather than react and recover after an attack or cybercrime.21
Preventative and active protection can address these cybersecurity challenges and offer the healthcare and pharmaceutical industry powerful protection against cybercrime and insider threats. Healthcare and pharmaceutical companies must take a proactive approach to cybersecurity and implement preventative measures to protect sensitive data.22 This can include securing the network perimeter, managing privileged access, implementing a zero-trust network access approach and securing cloud environments. Cybersecurity and cybercrime prevention training should also be provided to employees to minimise the risk of insider threats.
A cyber gap and impact assessment can identify the risks that are present. Such an evaluation should span both the digital and physical worlds to identify any gaps that malicious attackers could exploit, knowingly or unknowingly. Once the risks are known, the organisation can apply its risk appetite: pay to remove them, introduce compensating measures and controls, or accept them.
In conclusion, every industry needs strong cybersecurity and cybercrime prevention to protect its data and vital information from cybercriminals, and healthcare and pharmaceutical companies are especially desirable targets. These companies should take the appropriate steps: identify the risks, make decisions, and implement solutions and best practices to protect themselves. They hold an abundance of confidential data which, if breached, can have grave consequences, potentially harming the economy, health and the public in general. By taking a proactive approach, the industry can protect sensitive data and mitigate the risk of significant financial and reputational damage.
14 Distributed Denial-of-Service (DDoS) Attack – it is a cybercrime in which the attacker floods a server with internet traffic to prevent users from accessing connected online services and sites.
Is the global economy, or at least the situation in the West, showing some signs of normalisation, after what has been a difficult period for many? Are we soon to see the back of this period of high inflation? What is the link between these two factors? And what conclusions can we draw from the situation in the real estate sector? In this edition of the Economic Brief, we will delve into these topics to try to find some answers.
Though the criticality of cyber risk is now at the forefront of every economic actor’s mind, the technical complexity of this risk and the constant changes to the threat continue to make it difficult to create insurance products that are adapted to the needs of the client and profitable for insurers.
In light of what can be vital stakes facing market actors, it is necessary to answer the taxing question of cyber risk insurability. To do so, knowledge of the threat and financial quantification must come together to build credible scenarios on which to base the insurance models that will enable stakeholders to agree.
Accuracy combines its skills in financial modelling with risk quantification technology drawn from Citalid’s dynamic analysis of the cyber threat in order to inform its clients’ decision-making in this critical area.
—
Cyber insurance first emerged at the beginning of the 2000s and has since developed significantly, growing from coverage of risks linked to viruses and data loss to civil liability and operating losses.
Though the global cyber risk insurance market, representing around $9 billion today, is essentially captured by the US market, the same risk targets European actors, who, as a result, have the same insurance needs. However, the high loss ratio in 2020 in the context of a very narrow client base led insurers to tighten their conditions generally and even to withdraw certain offers. In 2021, the French cyber risk market represented around €220 million in revenue, that is, 3.1% of property damage insurance premiums for professionals, valued at €7 billion over the same period, and 0.35% of property and liability insurance revenues. This low level of coverage extends to only 0.2% of SMEs, against 9% of intermediate-sized businesses and 84% of large groups. Yet, SMEs are the most exposed to cyber risk; available indicators tend to show that 60% of SMEs go bankrupt within 18 months of a cyberattack.
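The 3.1% figure quoted above can be checked directly from the two premium amounts:

```python
# Cross-check of the quoted market shares.
cyber_premiums = 220e6              # French cyber premiums, 2021
property_damage_premiums = 7e9      # professional property damage premiums

share = cyber_premiums / property_damage_premiums
print(f"Cyber premiums as a share of property damage premiums: {share:.1%}")
```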
The need for cyber insurance is there, but many factors are hindering its emergence.
First, cyberattacks seem difficult to predict and understand. They can effectively be undertaken in highly sophisticated ways and the modi operandi are constantly changing. Modelling credible risk scenarios is therefore a complex exercise.
Second, the cost of damage caused by a cyberattack can be considerable and often includes financial losses, loss of intellectual property, violation of personal data, reputational losses, and more. The financial quantification of risk scenarios is made all the more complicated by the multiplicity of parameters.
Third, regulations in terms of cybersecurity vary from one country to another, making it difficult to create standardised insurance products for international businesses. Insurers must also comply with regulations in terms of personal data protection, which makes them more vulnerable to legal proceedings in case of violation of privacy.
Finally, the cyber maturity of businesses requiring insurance depends on numerous factors, in particular their IT systems and their internal policies and procedures, as well as how the users of their systems behave. Assessing these criteria – no easy task in and of itself – can often be hampered by the natural reluctance of companies to share details of their IT infrastructure and security policies. Determining the level of risk exposure and the relevant risk premiums is therefore not just a standardised task.
While the cyber insurance market struggles to find its model in the face of (i) a risk that is not easily definable and whose impact is difficult to grasp, (ii) a base of insurance clients that is still under construction and (iii) a changing regulatory environment, it is urgent to shed light on it through credible risk scenarios and sound financial quantification. That is the objective of Accuracy’s work in partnership with Citalid: it aims to identify attack scenarios by specifying their frequency and the magnitude of losses incurred, in light of the vulnerability of the company under consideration, thanks to the analysis of the company’s cyber maturity and the market in which it operates.
The objective and quantified information obtained from this work is intended to enable the company seeking insurance or wishing to evaluate the quality of its coverage to identify the most appropriate solution for its own configuration. Conversely, for the insurer, it contributes to the construction of relevant and long-lasting offers, as well as to the assessment of the insurability of clients.
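A common way to build such scenario quantifications is a frequency/severity simulation: attack counts drawn from a Poisson distribution and loss sizes from a lognormal. The sketch below illustrates the approach with purely assumed parameters; it is not Accuracy’s or Citalid’s actual model:

```python
# Frequency/severity Monte Carlo sketch of annual cyber losses.
# FREQ, MU and SIGMA are assumed parameters for illustration only.
import math
import random

random.seed(42)

FREQ = 0.3             # expected attacks per year (assumed)
MU, SIGMA = 13.0, 1.2  # lognormal loss-size parameters, EUR (assumed)

def draw_poisson(lam):
    """Poisson draw by inversion; fine for small lam."""
    u, p, n = random.random(), math.exp(-lam), 0
    cumulative = p
    while u > cumulative:
        n += 1
        p *= lam / n
        cumulative += p
    return n

def simulate_annual_loss():
    return sum(random.lognormvariate(MU, SIGMA)
               for _ in range(draw_poisson(FREQ)))

losses = sorted(simulate_annual_loss() for _ in range(100_000))
expected = sum(losses) / len(losses)
var_99 = losses[int(0.99 * len(losses))]
print(f"Expected annual loss: EUR {expected:,.0f}")
print(f"99th percentile loss: EUR {var_99:,.0f}")
```

The expected loss informs the pure premium, while tail quantiles such as the 99th percentile speak to capital requirements and policy limits; calibrating the frequency and severity parameters to a company’s actual cyber maturity is where the real difficulty lies.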
This report introduces Accuracy’s key recommendations and guidelines on three specific topics that the construction sector needs to address to move towards sustainable practices and achieve net zero by 2050.
The intent is to outline the areas where COP 28 should focus to enable and accelerate the implementation of best practices in the region, as well as foster innovation within the sector.
For our seventh edition of Accuracy Talks Straight, Delphine Sztermer and Nicolas Bourdon present the editorial, before letting René Pigot introduce us to NAAREA, a start-up which aims to design and develop fourth-generation low power molten salt reactors. We will then analyse 5G in Europe with Ignacio Lliso and Alberto Valle. Sophie Chassat, Philosopher and partner at Wemean, will talk about superstructures. Then, we will use the example of motorway concessions to evaluate the risk in business with Bruno Husson, Honorary partner. Finally, we will focus on 2023 with Hervé Goulletquer, our senior economic adviser.
In his 1977 novel The Gasp, Romain Gary already raises the issue of access to a cleaner source of energy. Gary imagines nothing less than recovering the souls of the deceased as fuel for machines, leaving humanity faced with the question of what it is willing to accept morally to pursue its growth and maintain its lifestyle. And Gary concludes: ‘The paradox of science is that there is only one response to these misdeeds and perils; even more science.’
Driven by the broad awareness of environmental challenges, the world of infrastructure is experiencing multiple revolutions and now finds itself in the spotlight on both the political and economic stages. From being the poor relation in terms of investment 25 years ago, infrastructure is now the focus of many expectations.
We see three major challenges to be met:
• The speed of deployment of a realistic decarbonised energy mix, adapted to local constraints, in which nuclear plays an important role as a primary source where geographically possible. The myriad new promising projects, covering the whole value chain from production to storage and distribution, will find specialised funds revising their investment scopes. Cooperation between political decision-makers and public and private actors will also be a determining factor to accelerate decarbonisation.
• The adaptation of distribution networks (hydrogen and electric) must go hand-in-hand with the adaptation (conversion) or the production of rolling stock in the automotive, railway and later aeronautical industries. Manufacturers and network managers must better cooperate to avoid the mutual wait-and-see attitudes that delay the deployment of solutions.
• The volume of projects necessary and the cash piles available mean that lenders need to rethink their financing execution processes to make them faster and smoother.
We have not forgotten the importance of waste treatment, water treatment and telecommunication networks. These critical sectors are facing their own technological challenges, but they are highly dependent on the quality of energies that they can consume or on the transport networks made available to them.
The world of infrastructure has never needed innovation more!
In light of the energy crisis and sovereignty challenges that have recently emerged, nuclear power is again the subject of much interest as a form of low-carbon energy. But what will be the nuclear energy of tomorrow? Next to conventional nuclear power, which remains the prerogative of governments and government-assisted bodies, numerous privately financed start-ups have begun operations in the last few years in this industry, like NAAREA (an acronym of Nano Abundant Affordable Resourceful Energy for All).
Through its XS(A)MR (extra-small advanced modular reactor) project, NAAREA aims to design and develop fourth-generation low power molten salt reactors (< 50 MW).
There are multiple advantages to this technology – initially developed in the 1950s and 1960s – according to the two founders, Jean-Luc Alexandre and Ivan Gavriloff. First, this type of reactor, which works by dissolving the nuclear fuel in salt melted at a high temperature (700°C), is safer thanks to the fission regulation systems made possible as a result of its smaller size. Further, there is no drain on natural resources because the reactor uses fuel from existing reserves of nuclear waste and thorium (the concept of waste-to-energy), thus limiting any dependence on uranium suppliers. In addition, the waste resulting from the process is very limited, reducing the risk of dispersal and issues linked to storage. But the main advantage of this innovation, and also what differentiates it from other major projects in the US or China, lies in its very small size. With a highly compact volume, not far from the size of a shipping container, the reactor can be deployed in an independent and decentralised way, making it possible to be closer to industrial consumers, without needing to reinforce current distribution networks or access to water. Ultimately, this type of reactor should ensure a more affordable energy price than that resulting from fossil fuels and renewable energy for an autonomy of up to 10 years.
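A back-of-the-envelope calculation gives a sense of scale: a reactor at the 50 MW upper bound running continuously for its stated 10-year autonomy (an idealisation that ignores load factor and maintenance) would deliver:

```python
# Idealised energy output: 50 MW continuous for 10 years.
power_mw = 50
years = 10
energy_gwh = power_mw * years * 365 * 24 / 1000
print(f"Energy over {years} years: {energy_gwh:,.0f} GWh")
```

Roughly 4.4 TWh per unit, which helps explain the interest in deploying such micro plants close to industrial consumers.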
NAAREA’s objective is to produce and operate its micro power plants directly and in large quantities, and to sell the energy produced to industrial clients. This positioning as a service provider is much more engaging and therefore reassuring for nuclear safety authorities, as it avoids the proliferation of nuclear operators. Here again, it differs from the numerous other competing projects that hope to provide the solution.
In return, this project requires significant investment. Today, the start-up has already raised tens of millions of euros from family offices and has built partnerships with major industry players: the CEA, the CNRS, Framatome, Orano, Dassault Systèmes and Assystem. Indeed, with Assystem, it is in the process of producing the digital twin of its reactor, scheduled for summer 2023. Participating notably in the ‘Réacteurs nucléaires innovants’ call for projects as part of the ‘France 2030’ investment plan, NAAREA hopes to raise hundreds of millions of euros more to be able to build its prototype by 2027 and its first unit by 2029.
In 2019, Europe proclaimed the advent of 5G, a new communication technology that would disrupt the way businesses and individuals interact with each other. Four years later, the deployment of the necessary infrastructure is far from complete, and companies at the forefront of its development appear to be taking things easy. Why? Because return on investment is unclear.
However, some experts see signs indicating that the current landscape is about to shift. New types of companies, born from new technologies such as augmented reality, autonomous devices or remote working, are demanding more data at faster rates and with new functionalities.
This gives rise to some questions: what is the enabler that could accelerate 5G penetration? Who will benefit most from 5G? How will investors recover their investment?
1. Are we facing the true tipping point for 5G adoption?
Ericsson estimates a significant increase in 5G subscriptions from now until 2028, driven by the availability of devices from several vendors, the reduction of prices and the early deployment of 5G in China. This massive worldwide adoption will lead to:
(i) an increase in the number of users to five billion (with video accounting for 80% of traffic); (ii) a more balanced distribution of users across continents; (iii) mobile data traffic increasing from 15 EB per month to 225 EB per month worldwide.
Figure 1 – 5G penetration rate by region
Figure 2 – Breakdown of 5G subscription by regions
Figure 3 – 5G mobile traffic data worldwide
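As a rough check on the scale of this projection, the implied growth multiple and annualised rate can be computed directly. Note that the six-year horizon (e.g. 2022–2028) is an assumption for illustration; the article does not state a base year.

```python
# Back-of-the-envelope check on the projected traffic growth cited above.
# Assumption (not stated in the article): a six-year horizon, e.g. 2022-2028.
start_eb_per_month = 15    # current worldwide mobile data traffic (EB/month)
end_eb_per_month = 225     # projected traffic by 2028 (EB/month)
years = 6                  # assumed horizon

multiple = end_eb_per_month / start_eb_per_month              # 15x growth
cagr = (end_eb_per_month / start_eb_per_month) ** (1 / years) - 1

print(f"{multiple:.0f}x overall, ~{cagr:.0%} per year")
```

Even spread over six years, a fifteen-fold increase implies annual growth of well over 50%, which gives a sense of the infrastructure capacity this adoption scenario would demand.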
This shift must be fuelled by the many advantages of 5G over 4G, which include increased upload/download speeds – up to 1 Gbps – as well as greater spectrum, higher capacity and lower latency.
Table 1 – Overview of 5G characteristics vs 3G and 4G
Another advantage
lies in its energy efficiency; estimates suggest that 5G technology consumes only 10%
of the power used by 4G.
Further advantages
include (i) reduced interference, (ii) improved security and (iii) connection
to newly developed products.
2. What is impeding the deployment of its infrastructure in Europe?
It is difficult to provide an accurate and up-to-date number of base stations currently in operation in Europe because the deployment of 5G networks is ongoing and varies significantly from country to country.
Table 2 – 5G Band types and characteristics
Several external
factors have impeded a robust deployment, factors that can be segmented into
two categories.
Business factors:
• The unclear monetisation strategy for network developers, with no concrete business case providing a sufficient return.
• A weaker tech ecosystem pushing for an accelerated adoption of 5G to serve their needs (e.g. immersive technologies).
• Challenging access to hardware, particularly since the US government blocked global chip supply to the blacklisted telecommunications giant Huawei.
• The slow replacement of old generation devices.
Political and
administrative factors:
• The highly fragmented European market, with hundreds of operators – in the US or China, there are three large operators with the critical mass to invest.
• Higher European administrative and bureaucratic hurdles to network providers.
• A lack of process standardisation – operators struggle to find deployment synergies due to the high level of competition in each local market.
• A weaker political push versus Asia where technologies are leveraged to monitor and control populations.
3. What is Europe doing to overcome these challenges?
The deployment of 5G in the EU is developing heterogeneously across its members. By the end of 2020, 23 member states had activated commercial 5G services and met the goal of having at least one major city with 5G accessibility. Nevertheless, not all national 5G broadband plans include references to the EU’s 2025 and 2030 ambitions.
Figure 5 – 5G roll-out in Europe
The European Commission has estimated that 44% of all mobile connections in Europe will be under 5G in three years. This objective has led the Commission to put in place many initiatives to accelerate the roll-out of 5G in Europe, proactively searching for solutions and working to enhance deployment. These initiatives have contributed to the fact that by the end of 2021, the EU had installed more than 250k 5G base stations, covering 70% of its population. However, this coverage lacks much of the functionality and data transfer speed attributed to the new technology.
4. Is 5G a lucrative business?
The
primary issue for operators is how to recover their investment. The way in
which operators monetise their services is expected to change from their
traditional business models; hence, monetisation strategies and 5G roll-out
business cases will need to be revisited.
For
example, the investment required to upgrade existing networks to 5G networks will
amount to c. €5 billion in Spain alone, plus an additional €2 billion estimated
by the Spanish government to drive the roll-out of 5G between 2021 and 2025. A
total amount, therefore, of €7 billion.
Operators
are aware of the inevitable increase in operating and capital expenditure.
However, there is no clear path to making the investment profitable. For operating
expenses, the collaboration of neutral infrastructure operators is crucial, as they
represent (i) enablers of a traditional 5G deployment, (ii) the perfect
partners for the scenario of large data consumption and ultra-high speed rates
and (iii) an essential resource to make these investments less prohibitive for
operators. For capital expenditure, operators are experimenting with different
options to seek a return throughout the value chain, some of which take them far
from their comfort zones. For example, Telefonica is studying monetisation using
APIs, which would enable users to pay for different configurations of speed, latency
and service quality.
In addition, network operators are looking for alternatives through partnerships with new technology providers to:
• better understand each other’s needs and capabilities;
• allow manufacturers to improve efficiency;
• avoid commoditisation and provide high value-added services around SaaS, PaaS or NaaS;
• better share earnings from these new services.
5. Who might be the “winners” of a successful 5G deployment?
Figure 6 – 5G value chain and its main players
While network
operators are struggling to find a way to monetise their investments, other players
are poised to extract value from the deployment of 5G:
• Tower/infrastructure providers – the construction of a large number of towers and base stations may be necessary in certain regions. However, the costs of dismantling obsolete infrastructure should also be considered.
• OTTs and other service providers – suppliers of autonomous driving, smart cities, Industry 4.0, telehealth, smart farming or entertainment, for example.
• Equipment providers – component and module suppliers, machinery and industrial automation companies and manufacturing companies for visual quality checks.
6. Conclusion
Tower
constructors and network operators are heading towards a new era of
communications, but the challenges they are facing are significant. The current
low level of demand is impeding the adoption of this new technology, and some investors
feel it does not justify the huge investment.
Meanwhile,
technology costs have increased in what has become a global competition for
hardware and chips. European players face fierce and imbalanced competition from
Chinese and US operators that benefit from better equipment sourcing, more
favourable administrative frameworks, higher economies of scale due to market
consolidation and higher demand fostered by their dynamic tech ecosystems.
On the one hand, building infrastructure requires
scale and long-term economic vision; on the other, technological disruption requires
agility. Today, the European ecosystem has proved to be weak when faced with this
paradox.
However, European
institutions have started to realise the extent of this gap and have begun to show
significant support for the technology. Nevertheless, this will not be enough for
its deployment, and private operators must raise their game.
For 5G to
become a reality in Europe, all stakeholders will have to collaborate, sharing
their knowledge and future profits. This means designing, modelling and
implementing major strategic alliances
between European operators. This also means sharing investments and designing
smart profit sharing with downstream tech ecosystem players benefitting from 5G
expansion.
Sophie Chassat, Philosopher, Partner at Wemean
Disruptive superstructures
If technical and
technological innovations require disruptive infrastructures, society will also
need to equip itself with ‘disruptive superstructures’. What is a
superstructure? It is the intangible equivalent of infrastructure, that is to
say, the ideas of a society, its means of expressing itself (art, philosophy,
morality) and its governmental institutions, as well as its cultural and
educational institutions.
It was Karl Marx
who invented this coupling of concepts to show their deep co-determination: a
society’s ideological superstructures depend closely on its tangible and
economic infrastructures – and vice versa. For example, the industrial
revolution gave rise to developments in both infrastructure (technical
innovations, mechanisation, division of labour, etc.) and superstructure
(liberalism, rationalism, bourgeois morality, etc.), which reinforced each
other.
What disruptive
superstructures will we need to support the tangible and economic changes of
our time? Though we know nothing for certain, what we do know is that our
current superstructures are no longer appropriate. This is the starting point
of an excellent TEDx talk by Sir Ken Robinson on our educational systems: the
paradigm on which they are based is still that of the industrial age.
Indeed, our
educational system was conceived in the 19th century in the economic context of
the industrial revolution. Logically, school is therefore organised to prepare
pupils for this system of production: ringing bells, separate facilities,
specialised subjects, and standardised study programmes and tests. This is what
Sir Ken Robinson calls ‘the factory model of education’.
The educational
system inherited from the industrial age has one fault in particular: it kills
creativity and divergent thought. And yet, this is something that our age, one
of the most stimulating in history, needs now more than ever! That is why a radical
change of paradigm in this area is essential, and it takes place in three
steps. First, end the myth that there is a division between the academic and
the non-academic, between the theoretical and the concrete; in other words,
stop separating education from life.
Second, recognise that most major learning is done collectively –
because collaboration is the basis of progression – rather than encourage
individual competition between pupils.
Third, change
the thought patterns of those who work in the education system, as well as the
architecture of the places where they work. The philosopher Michel Foucault
noted a profound resonance between the spatial and temporal organisation
of factories and schools. Inventive disruptive infrastructures will, in turn,
have to match these new disruptive superstructures. What will the schools of
tomorrow look like? Where will they be? Some, like Sugata Mitra in India, see
them in the cloud; others see them in the middle of nature, like the forest
schools that are flourishing in Europe. But why don’t we ask our children and
young people what they think? Creativity is after all their area of expertise,
no?
Bruno Husson, Honorary Partner, Accuracy
Time and risk in the valuation of a business – The example of motorway concessions
The value of an asset can be
easily approached by the prices observed on a market where comparable assets
are exchanged. This is the analogical approach to valuation. An alternative
solution consists of replicating in a valuation model the way in which the
market forms these prices. This is the intrinsic approach to valuation and the
founding principle of the discounted cash flow (DCF) method, the most prominent method
associated with this second approach.
The two key parameters of
any financial valuation: time and risk
The starting point of the
intrinsic approach to valuation is the definition of the concept of financial
value. According to this concept, the value of an asset is based on the cash
flows that the asset holder is likely to receive in the future. As these cash
flows are spread over time and subject to risk, the valuation model must
necessarily incorporate the behaviour of the investor with regard to two
parameters: time and risk. Financial theory indicates how to incorporate time
and risk in isolation, that is, how to incorporate time without consideration
of risk and how to incorporate risk as part of a single-period model (i.e.
without consideration of time).
When considering time, valuation
models use the discounting technique, that is, the commonly accepted assumption
that an individual has a preference for the present. On financial markets where
we implicitly exchange time (i.e. a sum of money held today against a sum of
money available at a later date) through the acquisition of securities
considered risk-free (treasury bills or government bonds), an individual’s
preference for the present is logically reflected by the existence of a
positive interest rate. This risk-free interest rate crystallises a fundamental
principle of finance: the time value of money (one euro today is not the
equivalent of one euro tomorrow because the euro received today, deposited at
the risk-free rate, will give more than one euro tomorrow). Based on this
principle, the discounting technique makes it possible to aggregate cash flows
(assumed to be risk-free) occurring at different dates, bringing them back to
today’s date by means of the interest rate, and finally to determine the value
of the asset associated with this series of cash flows.
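The discounting technique described above can be written compactly: for a series of risk-free cash flows $F_t$ over $T$ periods and a risk-free interest rate $r_f$, the value of the asset is

```latex
V \;=\; \sum_{t=1}^{T} \frac{F_t}{(1+r_f)^{t}}
```

Each cash flow is brought back to today's date by the factor $(1+r_f)^{-t}$, which is precisely the time value of money at work.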
When considering risk, the valuation
model frequently used by valuers is the Capital Asset Pricing Model (CAPM).
This model relies on segmenting risk into two components: (i) the specific risk
(or diversifiable risk), which asset holders can eliminate by diversifying
their portfolios, and (ii) the systematic risk (or undiversifiable risk), which
even the investor whose portfolio is perfectly diversified must bear. According
to the CAPM formula, the return required on a financial asset is equal to the
risk-free interest rate plus a risk premium that depends only on systematic
risk (the market does not remunerate the diversifiable risk). Thanks to CAPM,
we know how to calculate the value of an asset generating a risky cash flow
over a single period: it is the average cash flow (or ‘expected’ cash flow)
discounted at the rate of return given by the formula. The two components of
risk linked to the assets are taken into account: the specific risk through the
calculation of the expected cash flow (i.e. in theory, the average of the
expected cash flows in the various possible scenarios, weighted for the
likelihood of occurrence of these scenarios), and the systematic risk via the
discounting of the expected cash flow at a rate incorporating a risk premium.
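In symbols, the CAPM logic described in this paragraph reduces to two expressions: the required return $k$ and the resulting single-period value $V$,

```latex
k = r_f + \beta \left( E[R_m] - r_f \right),
\qquad
V = \frac{E[F]}{1+k}
```

where $\beta$ measures systematic risk, $E[R_m]-r_f$ is the market risk premium and $E[F]$ is the expected (scenario-weighted) cash flow.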
However, in practice, it is
necessary to take into account that valuations often concern
entities generating cash flows over several periods (or even in perpetuity).
This leads valuers to step away from the theoretical context mentioned above to
incorporate both the time and risk parameters in the same model (in other
words, combining risk with time).
The usual way of integrating risk in the discount
rate can lead to a significant underestimation of the entity being valued: the example
of motorway concessions
The typical approach used by
valuers to incorporate risk consists of transposing the CAPM into a
multi-period framework. More concretely, the price of time and (systematic)
risk are considered simultaneously over the lifetime of the entities being
valued via the discounting of expected future cash flows at a single risk rate.
This rate is equal to the risk-free interest rate increased by the (constant)
risk premium from the CAPM formula.
The alternative approach
incorporates successively (and not simultaneously) the time and risk
parameters: first the risk parameter via the determination of the ‘certainty
equivalent cash flows’ and second the time parameter via the discounting of
these cash flows at the risk-free interest rate. The certainty equivalent cash
flows incorporate the entirety of the risk and are therefore lower than the
expected cash flows that only incorporate the diversifiable portion of risk.
The difficulty in the
alternative approach lies in determining the adjustment coefficients to be
applied to the expected cash flows in order to obtain the certainty equivalent
cash flows. These coefficients can be estimated within the theoretical
framework of the CAPM, but the calculation formula, which is rather convoluted,
proves inapplicable in practice. It is also worth highlighting that, in the
context of a business valuation, the valuer must first appreciate the level of
optimism of the business plan, before even considering the risk integration
mechanisms. If the valuer considers that the business plan represents the
average scenario associated with the expected cash flows, he or she can then
either discount these cash flows at the CAPM risk rate or determine the
certainty equivalent cash flows and discount them at the risk-free interest
rate. If the business plan appears rather conservative, or even pessimistic,
the valuer cannot implement the usual approach without adjusting the business
plan cash flows upwards; however, he or she can directly choose the alternative
approach by considering that the business plan cash flows provide a reasonable
estimation of the certainty equivalent cash flows.
The usual approach to
integrate risk can be criticised because, by using the discounting technique to
combine risk with time (though this technique is, in theory, only supposed to
take into account the time value of money), it implicitly makes a significant
assumption on the development of the systematic risk by assuming that this risk
increases considerably with time. The alternative approach appears more solid
because, by dealing separately with the issues related to the incorporation of
time and risk, it does not make any assumption on the development of the risk. This
allows all valuation cases to be handled rigorously and in particular the
valuation of activities that benefit from good visibility over a long period
(for example, infrastructure projects) and for which the assumption of a risk growing
with time is particularly debatable.
By way of illustration, let
us consider a motorway concession likely to generate on average an annual cash
flow of €800m over a period of 30 years (inflation is assumed to be nil). Based
on a (real) risk-free interest rate of 1.5%, an asset beta of 0.5 and a market
risk premium of 5.5%, the rate of return given by the CAPM formula amounts to
4.25% and the value of the concession using the typical risk integration
approach comes to €13,423m (value of annual cash flow of €800m discounted at
4.25%). Given the good visibility of the revenues, which despite a relatively
fixed cost base grants the activity a low systematic risk (confirmed by the
beta coefficient of 0.5), it seems reasonable to base the determination of the
certainty equivalent cash flows on a constant abatement coefficient of 0.15. On
this basis, the value of the concession using the alternative approach comes to
€16,331m (value of the annual certainty equivalent cash flow of €680m
discounted at the risk-free interest rate of 1.5%). The gap in the estimated
values given by the two approaches amounts to around 22% and comes from the
implicit assumptions made on the development of risk over time. With the
alternative approach, the risk is assumed to be unchanging (the deduction on
the expected cash flow for risk is 15% in any given year); with the usual
approach the risk increases significantly with time (the deduction for risk
thus grows from 10% in year 4 to 21%, 31%, 40% and 50% in years 9, 14, 19 and
26 respectively, i.e. a very significant increase that the risk profile of the
activity cannot justify).
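The figures in this example can be reproduced in a few lines of code. This is only a sketch of the article's own calculation, using the parameters stated above.

```python
# Sketch reproducing the motorway concession example in the text,
# using only the parameters stated there.

def annuity_value(cash_flow: float, rate: float, years: int) -> float:
    """Present value of a constant annual cash flow discounted at `rate`."""
    return sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))

risk_free = 0.015                 # real risk-free interest rate
beta = 0.5                        # asset beta
market_risk_premium = 0.055
capm_rate = risk_free + beta * market_risk_premium   # = 4.25%

# Usual approach: expected cash flows (EUR 800m) discounted at the CAPM rate.
usual = annuity_value(800, capm_rate, 30)            # ~EUR 13,423m

# Alternative approach: certainty equivalent cash flows (15% abatement)
# discounted at the risk-free rate.
alternative = annuity_value(800 * (1 - 0.15), risk_free, 30)  # ~EUR 16,331m

# Implicit deduction for risk in year t under the usual approach:
def risk_deduction(t: int) -> float:
    return 1 - ((1 + risk_free) / (1 + capm_rate)) ** t  # grows with t
```

Running this confirms the roughly 22% gap between the two estimates and the rising implicit risk deduction under the usual approach (about 10% in year 4 versus about 50% in year 26).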
In conclusion, adopting the usual risk-integration approach to value activities that benefit from good visibility over a long duration is debatable and can lead to significant undervaluations.
Emerging markets and developing economies (EMDEs) have struggled through the succession of crises experienced over the past three years: health (Covid), geopolitical (the Russo-Ukrainian war, tensions in the South China Sea and growing Sino-American rivalry), economic (the return of inflation) and financial (the rise in interest rates and the dollar in a context of (primarily public) debt that calls for vigilance). Their growth, if we exclude China, declined more in 2020 than that of advanced economies, and the subsequent rebound was more modest. The gap will not be filled this year or next.
World: the least developed economies suffer the most
Investment in infrastructure in EMDEs seems to have particularly suffered from this rather unfavourable dynamic. According to the Global Infrastructure Hub (November 2022), investment in this area in 2021 grew by 8.3% in high-income countries and fell by 8.8% in their medium- or low-income equivalents. Indeed, in that year, 80% of infrastructure projects were implemented in developed economies. It is hard not to conclude that a return of confidence in growth prospects is a prerequisite for the recovery of infrastructure investment in EMDEs.
One more thing
before looking ahead: the multiplying effect of the change in financial
conditions on the growth profile needs to be measured. The tightening of monetary
policy by the main central banks makes financing EMDEs much more difficult.
And this observation is all the more pertinent given that their credit rating
is weak. According to the World Bank’s findings, bond issues for all countries
concerned fell by USD 250 billion in 2022 (much more than during the crises
scattered over the past 15 years!), whilst sovereign spreads
widened by 1,740 basis points (i.e. 17.4 percentage points) for low-rated, energy-importing countries.
According to
the IMF, the economic growth of EMDEs is projected to stabilise at around 4%
this year and next year too. With the aid of a
magnifying glass, we might discern a very slight upward trend (respectively
+4.0% and +4.2%, after +3.9% in 2022). But the cloud of uncertainty, at such a complex
time for the global economy, doubtless overshadows the extent of this
acceleration. Though the quantification proposed may seem enviable compared
with the projected performance of advanced economies, it is somewhat lacklustre
compared with past performances closer to 5.5%.
Where is growth in the emerging zone heading?
How then should we understand what might appear to be a note of caution in the IMF’s assessment?
First, there
is the nature of the rebound expected for the Chinese economy. The announced return to better fortunes represents good news for
the rest of the world. Fine, but to what extent? To answer the question, we need to push our
understanding of the economic recovery there a little further. Its
origin lies in the removal of restrictions placed on the movement of people.
The direct beneficiaries will therefore be those people. As consumers, they
will most likely favour services. This is what we were able to observe in
Europe and the United States, after all. Moreover, it seems reasonable to
prioritise the assumption of measured support through a proactive economic
policy. Favouring the trio ‘consumption – services – limited support via economic policy’ ultimately means not following the typical pattern of a Chinese recovery, which is usually the result of fiscal and monetary stimulus, debt and investment. This difference
between today and yesterday makes it easy to see the limits to the benefit that
other countries should derive from China’s announced improvement. A quick
glance at the composition of China’s imports shows this. The proportion going
to households is small.
Then, there
is US monetary policy. Its calibration conditions
both part of the movement of the interest rate curves across the world and the
level of the dollar against numerous other currencies, does it not? Though it
is possible to consider that the majority of the rise in the Federal Reserve’s
base rate has been undertaken (it is now on average at 4.63%), two aspects
should be detailed: where will the ceiling be for the current phase of rises and
how long will it stay at that level? Faced with inflationary pressures that send
no clear signals of slowdown and with a labour market that is still tight, we might
want to answer higher and longer than the market consensus estimates. We must
therefore conclude that the US interest rate environment, if it becomes less
adverse than it was, will not be immediately conducive to the formation of beneficial
financial and economic conditions for EMDEs.
Lastly, there is the ability of each emerging or developing country to relay, through its own monetary policy, the initiatives taken by Washington. That depends on its economic and foreign exchange balances, which vary considerably from one economy to another. If we rely on the sample presented below, only a minority has more than marginal room to lower its policy rate, provided the US situation allows it.
A la recherche des marges de manoeuvre pour baisser les taux directeurs dans le monde émergent
And then,
stepping away from the economy, but bearing in mind the implications that may
appear in this area, how can we not be interested in the political
developments to come! In this
area, a certain number of topics should be closely monitored; some are already
old and therefore well identified, whilst others have until now drawn less
attention:
• China: risk of conflict with Taiwan, US economic sanctions, increase in youth unemployment and undersizing of the retirement system
• Brazil: risk of political instability following Lula’s election
• Saudi Arabia: rapprochement with Russia and strategy to reduce oil production, considered hostile by the United States
• Israel: risk of war with Iran and consequences of the arrival of the far-right to government
• Russia: continuation of the war in Ukraine
• The electoral cycle, in particular the presidential and/or parliamentary elections in Nigeria, Turkey, Argentina and Lebanon.
This multidimensional view leads to the conclusion
that the auspices for 2023 do not appear particularly favourable for the
economies of EMDEs. However, the
capital markets are sending a much more optimistic message. Since last autumn, emerging zone bonds and equities have performed
well compared with those of the developed world, especially the US. How can we
explain this contrast?
In fact, investors and market operators have gambled that the world economy, and particularly the US economy, will not fall into recession: despite often low unemployment rates, central banks will succeed in bringing inflation to levels more or less compatible with their targets, without causing activity to fall for several quarters. In this context, the appetite for risk is returning, and emerging markets are taking advantage of it!
Emerging bond markets regaining colour against a backdrop of better-oriented US long-term rates
A return to better fortunes for emerging equity markets?
Which side should we come down on: fundamental analysis or the view of the financial markets? It is very difficult to say! Let us simply remember that bets on a soft landing of the economy are difficult to win, and doubtless even more so when labour markets remain tight.
In this edition of the Economic Brief, we will analyse economists’ forecasts for economic growth in 2023. Comparing visions for quarter 1 with those for quarter 4, we will consider how growth profiles for Western economies will evolve over the course of the year and look into reasons for any particular developments. We will also consider the priorities of the central banks, notably the Federal Reserve, and the actions it may take for the year.
Stopping financial crimes is a multifaceted challenge, and the increase in cyber threats has made this significantly more complicated. According to PwC, external economic crimes cost the firms they surveyed $42 billion annually.1 As investigators quickly navigate change, bad actors look to exploit any widening gaps in investigation capabilities. As reported by the World Economic Forum, we are gradually seeing the rise of risk management rather than compliance-run cybercrime strategies to combat these threats.2
Do the public and private sectors need a more proactive and focused approach to cybercrime risks and threats, especially concerning investigation intelligence?
Generating intelligence involves collecting information on cybercrime from a wide range of private, public and open sources and then processing and analysing that information. The objective is to enhance and grow intelligence, which will help the investigator fight cybercrime as effectively as possible.
Investigators and Intelligence
The cybercrime investigator is at the forefront of the fight against financial crimes, undertaking an array of intelligence collection and investigative tasks. This involves using multiple analytical platforms, investigative tools, open-source intelligence and other resources, all of which are constantly evolving, so techniques and tooling keep changing. It is common to see investigators’ desks jumbled with paperwork, and computers running multiple standalone applications across separate screens to keep up with the volume of wrongdoing.
Cybercrime investigations continue to be hindered by a fragmented approach. Numerous digital devices have proprietary operating systems and software that require specialised forensic tools to identify, collect, and preserve digital evidence. The time taken to investigate everything can seriously hamper or delay the identification of crucial evidence in an investigation.
Much of this approach is driven by the low quality and ungraded intelligence that investigators receive. Poor source material and inadequate analytical platforms can, in turn, generate a ‘throughput mentality’ amongst investigators, where the focus is less on the quality of the investigation and more on ensuring that volume-based targets have been met.
Intelligence grading, however, is a fundamental step in the intelligence process for investigators and others. Why? So that anyone reading the intelligence can have the confidence to depend on it. Once gathered, intelligence should go through a grading process in which a handling code is attached to the content as part of an initial risk assessment. Grading allows for a quick and accessible expression of this source risk assessment and supports sanitisation to protect the source. Based on this grading, an investigator can begin prioritising devices to review.
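As an illustration only, a minimal grading record might pair a source-reliability grade with an information-quality grade and a handling code, loosely in the spirit of schemes such as the 5x5x5 model used in UK policing. The field names, scales and priority rule below are assumptions, not a standard prescribed by the article.

```python
# Hypothetical sketch of a graded-intelligence record; scales and field
# names are illustrative assumptions, not a prescribed standard.
from dataclasses import dataclass

SOURCE_RELIABILITY = {"A": "Always reliable", "B": "Mostly reliable",
                      "C": "Sometimes reliable", "D": "Unreliable",
                      "E": "Untested"}
INFO_QUALITY = {1: "Known directly", 2: "Known indirectly, corroborated",
                3: "Known indirectly", 4: "Not known", 5: "Suspected false"}

@dataclass
class GradedIntelligence:
    content: str
    source_reliability: str  # key into SOURCE_RELIABILITY
    info_quality: int        # key into INFO_QUALITY
    handling_code: str       # dissemination restriction attached at intake

    def review_priority(self) -> int:
        """Lower number = review first; combines the two grades."""
        return "ABCDE".index(self.source_reliability) + self.info_quality

item = GradedIntelligence("Wallet X linked to mule account", "B", 2, "P")
```

A simple combined score like this is one way an investigator could triage which devices or leads to review first, as the paragraph above describes.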
Cybercrime Investigations and Intelligence
Financial cybercrime investigations happen regularly within financial institutions (“FIs”). Their quality can vary widely in many instances because FIs often do not work with optimal analytical and forensic tools or intelligence sources. Many FIs still use standard word processing and spreadsheet applications to collate, store, and analyse customer behaviour or first-generation analytical platforms to identify vital transactional relationships. Investigators must also work through various disconnected internal and external sources to collate material for their investigation.
The proliferation of separate data sources and intelligence streams can decelerate the investigation process, creating voids that allow mistakes and potentially missed connections that will undermine the value of the final assessment produced by the investigator. This ‘multiple system’ approach makes the investigation process more challenging to record, manage, and ultimately audit.
Despite efforts by regulators and authorities to continuously develop their strategies to fight financial cybercrime, bad actors continue to adapt, leading to more sophisticated threats and attacks in all areas of financial cybercrime. There are also significant challenges with ensuring that investigators get the right kind of contextual intelligence delivered in a way designed to support robust outcomes. Some of the problems an investigator may face include:
• Quantity of intelligence – this is a matter of either feast or famine, especially concerning the open source intelligence (“OSINT”) framework.3
• Quality of intelligence – the provenance and reliability of material are often hard to assess and lead to extended and fruitless efforts to corroborate information.
• Consistency of intelligence – for example, open-source searches on the internet will often produce results that vary depending on location, past user history, or what online vendors might be seeking to sell. The internet is not designed to help investigators find information.
• Security of investigations – some investigations can leave potentially glaring online footprints on more sensitive sites such as social media platforms.
• Use of digital evidence – the primary goal of digital forensics is to extract data from the electronic evidence, process it into actionable intelligence, and present the findings.
Evolution of Investigations
Experienced and well-trained investigators can help mitigate some of these problems, but even the best investigator must be equipped with the correct data and tools. A significant development is the advent of ‘intelligent automation’, which combines artificial intelligence and robotic process automation to put investigators at the centre of seamless and carefully curated intelligence environments.
These environments make the collation, analysis, and assessment of material as frictionless as possible by:
• Producing an all-around view – rather than following numerous lines of intelligence on various platforms, investigators can work through a primary platform that brings together the right kind of graded intelligence – internal and external – in a single space in the shortest time possible.
• Refining the intelligence deluge – instead of chasing down untold disparate sources from scratch, graded and improved intelligence allows an investigator to start with a solid set of foundation stones.
• Developing a structured approach – investigations can rapidly become unstructured as they move across different evidential sources. Having a structured dashboard will facilitate a more systematic approach.
• Integrating digital evidence – in this modern age of technology, digital evidence is an integral part of the entire intelligence and investigation process. It helps investigators capture critical intelligence and evidence from computer systems and networks. Digital intelligence and evidence are growing exponentially and must be managed appropriately throughout an investigation.
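The ‘all-round view’ and ‘refining the intelligence deluge’ points above can be pictured as a toy collation step that merges internal and external streams into one deduplicated, graded working set. Everything here (source names, fields, grade strings) is a fabricated assumption:

```python
# A toy sketch of the "all-round view": collating items from several
# disconnected sources into one deduplicated, graded working set.
internal_alerts = [
    {"subject": "ACME Ltd", "detail": "unusual wire transfers", "grade": "B2"},
    {"subject": "ACME Ltd", "detail": "dormant account reactivated", "grade": "C3"},
]
external_osint = [
    {"subject": "ACME Ltd", "detail": "unusual wire transfers", "grade": "C3"},  # duplicate lead
    {"subject": "ACME Ltd", "detail": "director named in leak site post", "grade": "B2"},
]

def collate(*sources):
    """Merge sources, keeping the best grade seen for each distinct lead."""
    merged = {}
    for source in sources:
        for item in source:
            key = (item["subject"], item["detail"])
            # A lower grade string sorts "better" here because "B2" < "C3".
            if key not in merged or item["grade"] < merged[key]["grade"]:
                merged[key] = item
    return list(merged.values())

workspace = collate(internal_alerts, external_osint)
print(len(workspace))  # 3 distinct leads instead of 4 raw items
```

A production platform would of course track provenance and audit history for each lead; the point is simply that one graded workspace replaces several disconnected ones.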
The decisions made by financial investigators within FIs are incredibly significant and can influence management decisions or the future of a customer relationship. If genuinely suspicious activity is missed due to poor investigative practices, poorly equipped analytical platforms, or inefficient use of intelligence, a crime that might otherwise have been detected and disrupted will be allowed to proceed. Having the best investigative tools to hand enables organisations to make these kinds of risk-based decisions confidently. At the same time, similar failings could lead to an innocent person being treated as a subject of concern, raising issues of fairness and having implications for financial inclusion. Whether the investigator gets it right or wrong will significantly affect how well an FI tackles financial crime, keeps its reputation intact, reduces monetary loss, and protects its customers.
Way Forward
Investigator-led approaches have better success against high-profile risks when using improved and integrated intelligence settings within the intelligent automation process. Their deployment within the financial services sector for cybercrime is already beginning to bear fruit, primarily when allied with closer cooperation between the private and public sectors in sharing risk information. As financial cybercrime practitioners focus increasingly on delivering results, it seems evident that empowering the investigator with the right tools to automate, collate, and grade intelligence will significantly improve the quality and efficiency of investigations.
Vendors and investors usually define a period of time during which the various analyses of the assets covered by the potential transaction are concentrated. Enshrined within the term due diligence, the analyses carried out may be of various natures. While financial, tax, and legal are the classics of the genre, a new discipline is beginning to emerge: analysing the enterprise's exposure to future cyber attacks and its compromise by past ones. While the principle seems simple, cyber due diligence actually covers a variety of very different and complementary aspects, far beyond the purely technical issue. The digital revolution means redefining the scope of risk for the enterprise.
The angle selected here is that of the buyer analysing a target. However, any areas discussed in this context could enrich an analysis conducted by a seller prior to a transaction.
Understanding the target environment
Just as the financial analysis of an enterprise or an asset is informed by comparison with market comparables, cyber analysis is greatly enriched by contextualising the threat facing the entity. Thus, before undertaking a cyber resistance analysis, it is important to understand the ecosystem in which the enterprise operates and any particular features it may have. Using these benchmarks, analysis of existing cybersecurity investments and mechanisms helps to better estimate the enterprise's cyber risk awareness and level of maturity. The challenge is then to assess whether this perceived level of risk is appropriate in relation to the threat and the investments made.
The threat is not the same for everyone. Although any structure with systems connected to the internet faces systematic threats, phishing attempts being the best known, some assets have to be concerned with more targeted attacks: particularly lucrative sectors, strategic or politically controversial activities or entities, and sectors known for their low technical investment, etc. Depending on the complexity of the case, this first step in a comprehensive cyber analysis requires linking various competencies. These include knowledge of the sector-specific threat history, of an enterprise's reputation and incidents, and even geopolitical decoding of its market and the country in which it is based. The heightened threat related to the conflict between Russia and Ukraine, for example, could more broadly affect allied companies at an economic or logistics supply level (energy, military, humanitarian, etc.). On another level, we note that the hospital sector is heavily targeted by ransomware in view of its exposure, but also because of the confusion that doubtless exists among attackers between the French public hospital model and that of American private hospitals. In the context of a sensitive activity (the concept of a significant or essential entity was recently re-specified by the European NIS2 directive – energy, transport, finance, health players, etc.), the assessment benefits from relying on a comprehensive analysis of the risks faced by the enterprise, among which cyber risk is only one element. Cyber risk must therefore be understood both as a risk in its own right and as a supporting risk within the scope of other risks. Thus, a vulnerability on a server exposed to the internet can lead to a risk of major media exposure following a disclosure of data, while internally, a failure to manage permissions on an accounting application can alter the truthfulness of the enterprise's financial statements.
A valuable component for financiers, this first step can lead to a comparative study of losses in the sector caused by cyber attacks, thus providing a first quantified conclusion.
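One common way to reach such a first quantified conclusion is the standard annualised loss expectancy formula from quantitative risk analysis (ALE = annual rate of occurrence × single loss expectancy). A sketch with invented sector figures:

```python
# Annualised Loss Expectancy (ALE), a standard quantified-risk formula:
# ALE = ARO (annual rate of occurrence) x SLE (single loss expectancy).
# The sector figures below are invented for illustration.
sector_incidents = {
    "ransomware":  {"aro": 0.30, "sle": 1_200_000},  # 30% chance/yr, 1.2m per event
    "bec_fraud":   {"aro": 0.50, "sle": 150_000},
    "data_breach": {"aro": 0.10, "sle": 3_000_000},
}

ale = {threat: v["aro"] * v["sle"] for threat, v in sector_incidents.items()}
total = sum(ale.values())
for threat, loss in sorted(ale.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{threat:>12}: {loss:>12,.0f}")
print(f"{'total':>12}: {total:>12,.0f}")
```

With these assumed inputs, ransomware dominates the expected annual loss; the same arithmetic, fed with real sector loss data, is what turns a threat study into a number a financier can negotiate with.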
While it is currently rarely practised, particularly because of the wide skill set required, this preliminary step of analysing the threat as a whole provides highly valuable insight for most economic players. In all cases, it allows us to address the analysis of the enterprise's internal mechanisms (the subject of the next step) with a much more detailed understanding of its issues, its market and its geography.
Defending the castle…
This is the most obvious and best-known point of attention: are the target's information systems normally robust, i.e., can they withstand reasonable attacks? The buyer, beyond the security of its asset, wants to know whether the target will require additional investment to improve its shielding or to align it with a level of security equivalent to that of its other assets. Price is therefore at stake, and this new element adjusts it according to the analysis's conclusions, opening up a new field of negotiation.
Unsurprisingly, the issues raised by cyber due diligence bring up questions of norms and market practices. In fact, absolute protection against cybercrime is almost impossible to guarantee, everything being a question of the time and resources deployed by attackers. It is therefore about judging a system's reasonable resistance and detection capabilities against professional attackers, who are themselves looking for efficiency and will turn to easier targets. When it comes to cyber, as elsewhere, you need to know how to calibrate your response so that the level of security is proportionate to the risk and threat level assessed in the previous step.
In the due diligence phase, it is therefore a question of usefully informing the buyer about the level of embedded risk without causing unnecessary alarm. The knowledge of the market and of the average level of protection deployed, acquired in the preliminary phase, will be key to the quality of the analysis, helping to maintain a reasonable recommendation that informs the price negotiations in a balanced manner.
To do this, what practices are accepted in the due diligence phase for conducting an internal analysis? Two main types of work can be differentiated:
• It may first be an analysis on a documentary or reporting basis: understanding of the systems, regularity of maintenance work and backups, tests carried out, their conclusions and the corrections made, level of obsolescence of the installed base and technical debt, structuring of the IT teams, etc. This work gives an overview of the attention paid and the resources deployed by the target for cyber risk management. This operating procedure is frequently accepted by sellers. It requires approximately two to four weeks of work, which is perfectly compatible with the time allotted to other due diligence, and often involves close coordination with the target's teams in order to establish an effective dialogue.
• Less common, the penetration test exercise (pentest) seeks to obtain an independent measure of the robustness of the system by looking for vulnerabilities, i.e., attempting to break into the target's systems, either from outside or with access to the corporate network. This work stops when access is gained and never reaches the point where information is copied. These tests are obviously more intrusive and require a formal agreement from the seller. They are limited in time and scope: implementation is ultimately simple and quick, as it does not require the use of the target's teams, and the target can continue its routine business. These situations are common, or even required by regulation, when cybersecurity is at the core of the enterprise's business or in particularly exposed sectors. It is obviously more difficult for competitive M&A processes to lend themselves to this mode of due diligence, unless they switch to exclusivity, usually giving a few additional weeks to carry out these investigations.
…but for what treasure?
This is the last aspect of the cyber analysis. After the resistance of the perimeter wall, it should be ensured that the assets on sale have retained their value and are not in the process of losing it, or worse, destroying it. We again differentiate two types of work, radically different from each other:
• On the one hand, the search for leaks of sensitive data that may already be circulating on the internet, or even for sale on criminal forums,
• On the other hand, the search for compromises, i.e., past or existing intrusions within the systems that place active or prepositioned malware there pending its activation (espionage, encryption and ransomware). Again, high-exposure or high-tech sectors incorporating R&D and intangible assets are particularly exposed. Virtually nothing is off limits: for example, the widespread adoption of managed services has for some years promoted the emergence of so-called “supply chain” attacks. These consist of compromising a subcontractor or, in this case, an enterprise in the process of being acquired, with a view to subsequently moving up to the final target. This attack also has a multiplier effect for the cybercriminal, as it makes it possible to target all of the subcontractor's customers.
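The first of these two types of work, searching for leaked data already in circulation, can be pictured as a simple scan of a credential dump for the target's email domain. This is a hedged sketch; the records, domain and pattern are fabricated for illustration:

```python
import re

# Toy sketch: scan an (inline) dump of leaked records for credentials
# tied to the target's email domain. All data here is fabricated.
TARGET_DOMAIN = "target-co.example"

leaked_records = """
alice@target-co.example:hunter2
bob@other-firm.example:passw0rd
c.durand@target-co.example:Spring2023!
"""

pattern = re.compile(rf"([\w.+-]+@{re.escape(TARGET_DOMAIN)}):(\S+)")
hits = pattern.findall(leaked_records)
for email, _password in hits:
    print("exposed account:", email)  # never print the password itself
print(f"{len(hits)} exposed accounts found")
```

In practice this search runs against breach databases and criminal forum dumps rather than an inline string, but the principle (match on the target's identifiers, report exposure, never redistribute the secrets) is the same.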
Conclusion
The reality of the cyber threat and the awareness of investors and officers are greatly accelerating the development of this work. Cyber threat intelligence (CTI) analysis shows that investment funds are targeted like other sectors. Who knows whether attackers could imagine that these companies are more willing to pay ransom demands?
Today, the customers for this due diligence are frequently former victims who have painfully become aware of the value of these analyses. The level of maturity of officers on this subject is still variable, and the understanding of cyber issues is too often confined to ransoms alone. Behaviours therefore remain imbued with the “alarm syndrome”, where the alarm is installed, most of the time, after the theft.
However, the digital revolution delivers a succession of new challenges: large-scale industrial espionage, private or state-owned, business paralysis, data commoditisation and privacy protection are issues that far exceed mere hacking and require data management to be integrated into all stages of business development. This applies to the protection of the business model, its employees and, more generally, the societal role of the enterprise.
The transaction phases are therefore by their nature times of tension that invite economic players to expand their usual field of analysis.
The continued swift development of high-tech access to the information highway has been a sustained boon for businesses and individuals. It has also proved to be a rich field for forms of white-collar crime. This article explores the anomalies surrounding these new “hybrid” crimes – combining elements of cybercrimes and white-collar crimes – and their impact.
While most of us were celebrating the holiday season, criminals were planning their next wave of cybercrimes and white-collar crimes. So what should the potential victims be planning to do? Buckle up! The menace of these crimes will continue to increase business costs and potentially disrupt services.
WHITE-COLLAR CYBERCRIME: A VERY BRIEF HISTORY
White-collar crime is an illegal or unethical act that violates an individual's or company's responsibility of public trust, usually during legitimate occupational activity1. The term was introduced in 1939 by the sociologist and criminologist Edwin Sutherland in his presidential address to the American Sociological Association. He used the phrase to describe types of crimes commonly committed by “persons of respectability”; in other words, individuals with high social standing2. Today, white-collar crime is generally understood to mean non-violent crimes that are financially motivated and committed by professional workers in connection with their work.3
Edwin Sutherland did not mention cybercrime. Why? The term cybernetics was not yet part of the lexicon. Still, it took several more decades after he published his work for criminals to take advantage of advances in information technology and find new ways to commit crimes using new technologies, namely computers and the Internet.4 Now, computers are a standard part of office furniture in most businesses, and the potential to use those computers and the Internet to commit crimes is ever-present.
As technology and cybercrimes continue to develop, the scope of cybercrimes and white-collar crimes has increasingly overlapped; this merging is often called “white-collar cybercrime” and presents a growing worldwide trend5.
THE OVERLAP BETWEEN WHITE-COLLAR CRIMES AND CYBERCRIMES
Given this overlap between white-collar crimes and cybercrimes, one might ask what characteristics mark these crimes and, drawing from those characteristics, what sorts of strategies could help companies or individuals prevent these types of crimes.6
The significant difference between cybercrimes and white-collar crimes relates to trust. At its core, white-collar crime often involves offences based on violations of trust, including delegated or indirect trust in, for example, the medical, business, financial, or legal professions, among others. The role of trust in cybercrime is not the same. We do not trust others with our data, or trust that others will not spy on us. That is why we use multiple alphanumeric passwords and multi-factor authentication and protect our data through anti-virus protection, security applications and encryption.
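The multi-factor authentication mentioned above commonly relies on time-based one-time passwords (TOTP, RFC 6238), which fit in a few lines of standard-library code. A minimal sketch, checked against the published RFC 6238 test vector:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: float, step: int = 30, digits: int = 6) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the 30-second time-step counter."""
    counter = int(timestamp) // step
    msg = struct.pack(">Q", counter)                  # counter as big-endian 8 bytes
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T = 59 seconds.
print(totp(b"12345678901234567890", 59))  # → "287082"
```

Because the code depends on the current time step, a stolen password alone is not enough; the attacker would also need the shared secret held on the user's device.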
Regarding similarities, criminals committing cybercrimes and white-collar crimes rely on deception and concealment rather than the application of force. The crimes often involve complex schemes, cover-ups, and the use of the latest technology. Many crimes are challenging to detect because losses may not be immediately apparent to victims, with the consequences not surfacing until well after the crime has occurred. Despite this, history shows us that there are often red flags that can indicate wrongdoing: employees not taking vacations, fake vendors, missing documents and inventory shortages, to name a few.
The primary point is that there is a significant and growing overlap between cybercrime and white-collar crime. Current white-collar cybercrimes include fraud, extortion, identity theft, money laundering, counterfeiting currency, property and mortgage scams, and public corruption (although such crimes do not necessarily require a cyber component).
LOOKING FORWARD
What will these white-collar cybercrimes look like moving forward?7 Consider, for example, the white-collar crime of cheque fraud, which has been widespread for decades.
The modern-style cheque was introduced in England in the 17th century, and by the 18th century cheques had become a widely used and accepted form of payment. From England, their use quickly spread to other countries, and the cheque has since become a widely used form of payment worldwide8.
As for the first recorded instance of cheque fraud, it is difficult to determine an exact date. However, as soon as cheques became widely used, they also became a target for fraudsters. Historically, cheque fraud has taken many forms, from counterfeiting and forgery to theft and exploitation of loopholes in the banking system. As cheque usage and technology have evolved, so have the methods of cheque fraud, but are such crimes and other scams being replaced with hybrid white-collar cybercrimes that use stolen data that criminals share?
Notably, we are witnessing explosive growth in new criminal marketplaces dedicated to advertising and selling victims’ data, likely leading to a proliferation of white-collar cybercrime.9 The marketplaces are where criminals can share intelligence, expertise, and illegal resources. They exist on both the deep10 and dark11 web. On the deep web, criminal marketplaces sell illicit goods and services, such as stolen credit card information, counterfeit money, illegal drugs, and more. These marketplaces often operate just beneath the surface web and use encryption and other techniques to conceal their activities from law enforcement and the public.
On the dark web, criminal marketplaces are even more prevalent and are often used to buy and sell illegal goods and services anonymously. The anonymity offered by the dark web, combined with the difficulty of tracking and prosecuting criminal activity, has made it an attractive platform for criminal organisations to operate.
While much of the data found there is likely stolen, other data may come from legitimate companies that specialise in selling personally identifiable information. Such companies can collect the data because we tick the box at the start of a website, application or tool allowing our data to be stored for third-party use, or we freely forward our CVs to companies who use them for marketing purposes. Malicious actors can obtain that same information – in many instances legally – but then use it for illicit purposes or sell it to others in data markets. Such marketplaces have lowered the barrier to entry for less experienced or less technical cybercriminals12. As the global economy stutters, there is a risk that the supply of hackers-for-hire will grow, so expect a boom in white-collar cybercrime as a service.
Two thousand and twenty-three could also be the year of “deep fake”13 cybercrime. Deep fakes have been on the scene for a few years, but the creation tools have become cheaper and more user-friendly14. The threat to business is that deep fakes could increase the effectiveness of ransomware, phishing and business email compromise attacks, make identity fraud easier, and manipulate business reputations to cause an unfounded fall in share value.
Will such white-collar cybercrimes ever go mainstream? One example of how deep fakes can be used for fraud comes from Patrick Hillmann, chief communications officer at the cryptocurrency exchange Binance, who in August 2022 found himself the victim of a new approach to spoofing using artificial intelligence (AI) generated video. He received several online messages from individuals claiming he had met with them regarding “potential opportunities to list their assets in Binance” – something he found strange because he did not oversee Binance’s listings. Moreover, he had never met any of the people messaging him15. The result is that social engineering will only become easier with deep fakes.
This highlights that cybercrime and white-collar crime already impose significant costs on business, and consumers ultimately bear them. Especially when you consider that the global cost of cybercrime is expected to surge in the next five years, rising from $8.44 trillion in 2022 to $23.84 trillion by 202716, we can all but assume that white-collar crime is on the rise. It is also a more significant monetary problem than other types of crime: businesses lose around five per cent of their revenue annually to white-collar crime, and the average loss is $1.5 million per case. To make matters worse, nearly 90% of white-collar crime goes unreported17, and the Department of Justice (DOJ) has in the past been more specific about cybercrime, stating that only one in seven cybercrimes is reported, which means over 85% of cybercrime remains hidden within organisations18.
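As a quick sanity check on the figures cited above, the projected rise from $8.44 trillion in 2022 to $23.84 trillion in 2027 implies a compound annual growth rate of roughly 23%:

```python
# Implied compound annual growth rate (CAGR) of the global cybercrime cost,
# using the 2022 and 2027 figures cited in the text.
start, end, years = 8.44, 23.84, 5  # trillions of USD
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # → roughly 23%
```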
DIFFICULTIES WITH UNREPORTED CRIME
There are several reasons why it is difficult to establish accurate figures concerning unreported crimes, especially cybercrimes and white-collar crimes:
1) Underreporting: Many victims of cybercrimes and white-collar crimes do not report these incidents to the authorities because they are unaware of the crime or fear retaliation or embarrassment.
2) Complexity: These types of crimes often involve complex financial transactions or the use of advanced technology, making them difficult to detect and investigate.
3) Lack of awareness: Many victims of cybercrimes and white-collar crimes may not be aware that they have been victimised or may not understand the full extent of the damage done.
4) Difficulty in proving: Proving a cybercrime or white-collar crime can be difficult, as the evidence may be intangible or difficult to gather. In addition, these crimes often involve sophisticated techniques, making it difficult for law enforcement to detect and prosecute them.
5) Lack of resources: Law enforcement agencies and other organisations may lack the resources or expertise to investigate and prosecute cybercrimes and white-collar crimes effectively.
These factors can lead to significant underreporting of these crimes, making it difficult to establish accurate figures and develop effective strategies for combating them. However, many governments and organisations are working to improve their ability to detect, investigate, and prosecute cybercrimes and white-collar crimes and raise awareness of these crimes among the public.
WHAT COMPANIES SHOULD BE DOING
As companies face growing regulatory, shareholder and stakeholder scrutiny, they must establish strategies and investigate allegations of corporate wrongdoing early. Allegations of white-collar cybercrime can arise from external or internal auditors, whistle-blowers, regulators, or the media. Businesses, and their legal advisors, may find it necessary to conduct an investigation and take appropriate action to limit their exposure to formal proceedings.
Companies must identify the parallels and differences between these crimes, and what they imply, to keep up with the criminals. For example, identifying white-collar cybercrime patterns will highlight appropriate prevention of and responses to these offences. Such understanding is needed to determine whether we respond with white-collar, cybercrime, or hybrid strategies.
As a final point, to distinguish white-collar cybercrimes from other types of crime, do we need to establish public and private sector partnerships? Such partnerships would, in turn, enable us to improve the current understanding of white-collar cybercrime and of how technology will continue to shape crime in the office and in cyberspace.
4 The first cybercrime could be said to have occurred in France in 1834, long before the Internet came into existence. Those responsible stole financial market information by accessing the French telegraph system. Arctic Wolf, “A Brief History of Cybercrime”, November 2022, https://arcticwolf.com/resources/blog/decade-of-cybercrime/
6 However, it is safe to say that certain cybercrimes are not white-collar crimes, including cyberterrorism, cyberwarfare, and cybersex trafficking. Conversely, certain types of white-collar crimes may not (necessarily) be cybercrimes, including health care fraud, securities and commodities fraud, and mortgage fraud.
10 The deep web refers to parts of the internet that are not indexed by search engines and are not easily accessible to the general public. This includes things like password-protected databases, online banking systems, and other content that is not intended to be publicly accessible.
11 The dark web is a hidden network of websites that can only be accessed using special software, such as the Tor browser. These websites are often used for illegal activities, such as the sale of illegal goods and services.
12 Jeff White, “Ransomware as a Service: Criminal ‘Entrepreneurs’ Evolve Ransomware” paloalto, October 2021,
13 Deepfakes create a false story originating from trusted sources. The two main threats are spreading disinformation to influence opinion towards a desired effect, such as a particular election outcome, and targeting individuals or companies to obtain a financial return.
This article examines the meaning of the term ‘cybercrime’ and what drives the criminals responsible. It describes, in layperson’s terms, some of the more prolific attacks that companies and individuals may suffer and highlights the importance of preparing to respond and investigate effectively.
Cybercrime – old crimes, new tools? This phrase, coined in 2001 by the UK National Hi-Tech Crime Unit1, is a play on the saying ‘old wine in a new bottle’. It covers any criminal act defined by law, or civil wrongdoing, perpetrated using a computer or any electronic device, on or off the internet, such as theft, fraud and blackmail. Today we also have ‘new crimes, new tools’: offences that can only be committed by, or rely upon the availability of, a computer or other technology, including the internet. These include malware and virus attacks, denial of service attacks, hacking offences and identity theft. Hence the term ‘cybercrime’.
According to the US Department of Justice, all cybercrime can be organised into three categories:
• Crimes that use computers as a weapon – hacker attacks
• Crimes that target a computer or another device to gain access to a network
• Crimes where a computer is neither the facilitator nor the target but still plays an integral part in storing illegally obtained proprietary data.2
Cybercrime is escalating, both in scale and in complexity.3 It can affect everyone, from essential services, such as a French hospital attacked in August 2022,4 to multinational businesses and private individuals. It causes significant network downtime, financial loss and reputational damage.5 The rise is confirmed by figures released in August 2022 showing that 25,841 people used the Dubai Police e-crime platform on its website in 2021 to report cybercrimes.6 The 2021 research figures from the UK technology website Comparitech support these statistics: the UAE saw a 255% rise in cybercrime reporting and a loss of $746 million.7
In our post-pandemic digital age, most companies conduct their business online using systems, networks, software and applications that instantly connect them anywhere. While this connectivity enables business operations globally, the increased use of connected platforms presents more significant risks than before. As technology grows and our use of it advances, so does cybercrime and the tactics criminals deploy to exploit exposed security systems and the vulnerability of users.
A key consequence of cybercrime is monetary. Cybercrime can embrace various forms of profit-driven criminal activity, including ransomware attacks, email and internet fraud, and identity fraud, as well as endeavours to misuse financial accounts, credit cards or other payment card information.
Although many law enforcement agencies worldwide have started cracking down on cybercrime, the increasing trend shows no sign of decline, especially in ransomware8 and business email compromises (BEC).9 Some cybercriminals now reside in countries with weak cybercrime laws. In addition, they have switched from using dollars to cryptocurrencies to avoid prosecution or having their illicit funds seized.
As noted above, there are numerous diverse types of cybercrime, most carried out in anticipation of financial gain by the perpetrators. Some common varieties of cybercrime include the following:
Ransomware
This malicious software locks victims out of their computers or blocks access to the files stored on their hard drives. Since 2013, ransomware has cost businesses and institutions billions of dollars in lost revenue. Criminals demand bitcoin or other crypto payments to unlock the computer system.
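One crude technical indicator sometimes used when hunting for ransomware activity is byte entropy: freshly encrypted files look close to random, while ordinary documents do not. A minimal sketch; the 7.5 threshold is an illustrative assumption, not a production rule:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 is the maximum, for uniform random data)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Plain text uses a small alphabet, so its entropy is low; random bytes
# (standing in here for an encrypted file body) score close to 8.
plain = b"quarterly report: revenue up 4% on prior year " * 50
random_like = os.urandom(4096)

for name, blob in [("plain", plain), ("random-like", random_like)]:
    e = shannon_entropy(blob)
    print(f"{name}: entropy={e:.2f} suspicious={e > 7.5}")
```

Real detection stacks combine many signals (file rename bursts, backup tampering, process behaviour); entropy alone also fires on legitimately compressed files, which is why it is only one heuristic among several.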
While the significance of cyber security grows, many are still unaware of measures they can implement and actions they can take to mitigate risk, combat cybercrime and protect their future. The reality is that the best way for businesses to protect themselves against cybercrime is to invest in cyber security and understand how new tools can combat new and old crimes.
Business email compromise
BEC is a form of email fraud in which employees with access to company finances are tricked into making money transfers or sharing information by emails pretending to be from the CEO or a trusted customer. This cybercrime can cause substantial financial damage to a company. Investigators and lawyers must work closely with businesses to quickly assess the situation and provide solutions.
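A toy version of one common BEC control is to flag messages whose display name matches an executive while the sending address sits outside the corporate domain. The names and domains below are fabricated:

```python
# Toy BEC check: an executive's display name on a non-corporate address
# is a classic impersonation signal. All names/domains are fabricated.
CORPORATE_DOMAIN = "example-corp.com"
EXECUTIVES = {"Jane Smith", "Chief Executive Officer"}

def is_suspicious(display_name: str, address: str) -> bool:
    domain = address.rsplit("@", 1)[-1].lower()
    return display_name in EXECUTIVES and domain != CORPORATE_DOMAIN

print(is_suspicious("Jane Smith", "jane.smith@example-corp.com"))  # False
print(is_suspicious("Jane Smith", "ceo.urgent@freemail.example"))  # True
```

Real mail gateways add many more signals (lookalike domains, reply-to mismatches, authentication results), but this single rule already catches the most naive display-name impersonation.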
Phishing
This type of cybercrime is prevalent. Email fraud is intentional deception carried out by email for personal gain or to damage another individual. Almost as soon as email became widely used, it began to be used as a means to defraud people. Email solicitations to purchase goods or services may be instances of attempted fraud. The fraudulent offer typically features a popular item or service at a drastically reduced price – too good to be true…
Identity theft
This cybercrime occurs when an attacker accesses a computer to gather a user’s personal information and then uses it to steal that person’s identity or access their personally identifiable information (PII). Cybercriminals typically buy and sell PII on Darknet10 markets or use it to facilitate another cybercrime or intelligence-gathering enterprise.
With the increasing number of online criminal activities, new cybercrime models can be found in the tech news almost daily. Computers and the internet have fundamentally changed how we interconnect, participate in business, act and work with the rest of the world. The benefits of these technologies are substantial, though they also create a range of threats.
The precise effect and cost of cybercrime on companies are challenging to assess. While financial losses due to cybercrime can be noteworthy, businesses can also suffer other collateral consequences as a result of cybercrime, including but not limited to the following:
• Anyone can fall victim to cybercrime, and the fact that we are all virtually connected puts us at an even greater risk.
• Harm to stakeholder perception after a cybercrime can cause a company’s worth to decline.
• In addition to a drop in share price, businesses that have been victims of cybercrime, particularly when avoidable, may also face higher borrowing charges and greater scrutiny when attempting to raise capital.
• Loss of confidential and sensitive customer data can result in fines and penalties for companies that have failed to protect their customers’ data.
• Damaged brand identity and loss of reputation after a cybercrime dents customers’ trust in a company and that company’s capability to keep their data safe. Following a cybercrime, companies lose existing customers and the capacity to gain new customers is often reduced.
• Companies can also sustain direct outlays from cybercrime, including increased insurance premiums and the cost of bringing in cybercrime experts to conduct proactive and reactive incident response and remediation, public relations preparation and other related services. Such proactive investment helps ensure a company is, at worst, a target of opportunity rather than a target of choice.
Considering the sums involved and the insignificant amounts recovered in these cases, it is not difficult to see why criminals are surveying the cybercrime marketplace. With such vast gains and reduced risk, it is often advantageous to become a cybercriminal, resulting in a lose-lose situation for the victims.
In conclusion, since multiple forms of cybercrime have been observed for more than 25 years11, it is unlikely that they will ever cease due to legislative, policing or jurisdictional efforts. Cybercriminals are like businesspeople – they want to make money. Therefore, is there greater value in considering how their tools and platforms can be controlled or monitored? Or, due to the rise of such cybercrimes happening regularly, should awareness training and campaigns be increased to educate more and more people to be safe from cybercrime? Lt Gen Dhahi Khalfan Tamim of Dubai Police recently stated: ‘We need to prepare and qualify a generation that is capable of not only detecting these crimes and presenting undisputed evidence but also skilful enough to stop these crimes before they happen.’12
Notes
1 https://www.cyber-rights.org/documents/ncis_1801.htm
2 https://softwarelab.org/what-is-cybercrime/
3 https://earthweb.com/cybercrime-statistics/
4 https://www.france24.com/en/europe/20220823-cyber-attackers-disrupt-services-at-french-hospital-demand-10-million-ransom
5 https://ir.mcafee.com/news-releases/news-release-details/new-mcafee-report-estimates-global-cybercrime-losses-exceed-1
6 https://www.thenationalnews.com/uae/2022/08/24/more-than-25000-cybercrimes-reported-last-year-say-dubai-police/
7 https://www.comparitech.com/blog/vpn-privacy/cybercrime-cost/
8 https://www.expresscomputer.in/news/why-are-ransomware-attacks-increasing-these-days/88768/
9 https://threatpost.com/fbi-bec-43b/179539/
10 Darknet, a computer network with restricted access that is used chiefly for illegal peer-to-peer file sharing
11 https://cybersecurityventures.com/the-history-of-cybercrime-and-cybersecurity-1940-2020/
12 https://www.thenationalnews.com/uae/2022/10/27/dubai-security-chief-warns-of-need-to-tackle-cyber-crime
Accuracy, the international independent advisory firm has appointed Christy Howard as a new partner in its London office. Christy will strengthen Accuracy’s growing transactions & investments advisory practice, which advises on M&A deals.
Accuracy, the international independent advisory firm, has promoted four of its directors to partners. This brings the total number of Accuracy partners to 62, spread across 13 countries.
The year 2022 has been a difficult one for many, full of economic events and shocks that have not gone without leaving their mark. However, in this last edition of the Economic Brief in 2022, instead of recapitulating these main events, we will focus on three areas that have not necessarily been the centre of attention in the news: the state of globalisation today, how profits differ between the US and Europe and what the financial cycle holds next for us.
Accuracy has been involved in the merger of BECM Germany with Targobank, both major entities operating in Germany. This transaction aims to strengthen the international expansion of the group.
Accuracy advised Gi Group Holding in the context of its acquisition of Eupro Holding AG, the holding company of a group of leading Swiss-based companies (“Eupro”) focused on the recruitment and human resources industry.
Small and medium-sized enterprises (SMEs) contribute significantly to global economies, in both advanced and emerging markets. For instance, in the European Union (EU), SMEs represent 99% of all businesses, employ two thirds of the work force and account for more than half of the region’s GDP. These numbers can be even higher in emerging markets.
In addition to these impressive numbers, SMEs, as they are generally run by entrepreneurs, contribute significantly to innovation and shaping future economies. They are present in all industries where barriers to entry can be relatively low and do not require a large group of people at the beginning. Recently, the SMEs that have drawn the most attention are Fintech firms.
Historically, SMEs face significantly more difficulties in obtaining bank loans when compared with their larger counterparts. They have to rely more on internal funding or funding from family and friends. This is because (1) SMEs generally possess higher default risk than individuals or large corporates due to their higher failure rate and (2) there is generally a lack of quality data for banks to assess the creditworthiness of SMEs to support their lending decisions.
However, thanks to Fintechs and government support, the availability of both higher quality traditional data and alternative data is now growing. Banks are therefore reconsidering SME lending. In this paper, we will focus on discussing SME credit scoring, as a critical tool for SME lending.
KEY TRENDS IN SME LENDING MARKET
SMEs account for the vast majority of business establishments globally, and most of them seek loans from banks for business development. They represent a unique segment in the lending market, as they require very different services from retail customers or large corporations. For instance, an SME may not have a large finance team to provide comprehensive financial records and business data for banks to make credit decisions. It may not have the time or the resources to go through a long loan application process. As a result, they have become a tough nut to crack and therefore a somewhat underserved segment.
However, in an era of technology enablement and financial inclusion, things are changing fast. With technology that enables fast customer onboarding and screening and the availability of alternative data for risk assessment and monitoring, SMEs are coming into the spotlight.
We have observed five key trends in the SME lending market, including changing SME business models, faster banking processes, greater regulatory support on Fintech adoption, rising competition from newcomers and growing service offerings.
Figure 1: Key trends in SME lending market
Source: Accuracy
Changing SME business models
SMEs across various industries are being forced to alter their business models drastically in order to stay in business. As technology adoption and digitisation have become global trends, SMEs are no exception. The development of new business models (e.g. asset-light, online-based) makes it difficult for banks to assess their credit quality from a traditional perspective.
Faster banking processes
With the help of technology and alternative data, Fintechs and some traditional banks are now able to process SME loan applications much more quickly. For instance, Liberis, a UK Fintech that provides finance for small businesses, can interface with a company’s bank account and dashboards to enable access to instant funding, which may be retrieved in a matter of minutes for SMEs that need it.
Regulatory support on Fintech
Financial technologies have driven global innovation in financial services. At the same time, they are altering the nature of commerce and end-user expectations for financial services. Regulatory bodies are increasingly open to innovations and supportive of the adoption of Fintech solutions. As such, we have seen a number of Fintech companies specifically targeting SME finance in the past few years.
Rising competition from newcomers
Alternative lending providers such as tech giants are entering the battlefield. Big players like Google, Amazon and Tencent, as well as their more regional counterparts, have been putting pressure on banks for some time. There is a good probability that this pressure will increase as Techfins increasingly use their potent consumer franchises and advanced digital capabilities to outbid banks, especially in SME lending. Competition in the field will be increasingly intense.
Increase in service offerings
The SME banking sector has transformed as a result of changes brought by Fintechs, Techfins, government and regulatory support, and challenger banks. SME clients now have more options than ever to obtain access to financing. Banks must modify their SME offerings to compete in an environment where SMEs are looking for a suite of services (e.g. invoicing, corporate credit cards, payroll management) and one-stop-shop experiences.
TRENDING THEMES IN SME CREDIT SCORING
To tap into and expand their business in SME lending, banks need a series of tools, ranging from fast customer onboarding platforms to accurate risk assessment and monitoring tools. In this whitepaper, we dedicate our discussion to the use of credit scoring for risk assessment and customer acquisition.
Credit scoring is a statistical method for determining a borrower’s creditworthiness by combining a number of risk factors into a single score. We have observed several trends in SME credit scoring: (1) the use of alternative data and the adoption of data sharing platforms; (2) the adoption of advanced analytic solutions; and (3) the streamlining and automation of credit approval processes.
Figure 2: Key trends in SME credit scoring
Source: Accuracy
Alternative data and data sharing platforms
Traditionally, banks use a limited set of data to perform credit decisioning. These can be grouped into financial variables and non-financial variables. In recent years, the banking industry has seen a rise in the use of alternative data, which adds value throughout the customer banking lifecycle, particularly during the credit evaluation process. As the value of alternative data has gradually attracted more attention from banks and regulators, so has the concept of Open Banking and data sharing platforms. For example, the Commercial Data Interchange platform of the Hong Kong Monetary Authority (HKMA) aims to connect numerous SME data owners and providers with financial institutions for easier, faster and better credit assessment.
Figure 3: Types of credit scoring data
Source: Accuracy
Alternative data
Until now, the majority of banks have still assessed potential borrowers’ creditworthiness using traditional credit data and methods. However, traditional data capture only the tip of the iceberg when it comes to the borrower’s information. The use of alternative data presents two attractive opportunities for banks. Firstly, alternative data help banks enhance model performance. Secondly, they help banks expand the total addressable market (TAM) by making sound credit assessments possible for more borrowers. Meanwhile, the growth in computational power has effectively lowered the barriers to collecting and processing big data.
Alternative data are generated everywhere in the digital footprint of a company. The following figure describes typical examples of alternative data.
Figure 4: Alternative data providers and sources
Source: Accuracy
Data Sharing and Open Data
Governments and industries across the world are promoting the concept of open data and data sharing. The idea is to facilitate data transmission among various stakeholders to enhance overall efficiency. In some cases, banks are being urged to exchange customer data in a machine-readable format so that customers can access and securely transmit their banking information to reliable parties. This makes it easier for borrowers, especially SMEs, to switch financial service providers seamlessly, increasing availability of funding sources and opportunities.
One recent example of the use of alternative data in SME financing is the launch of the Commercial Data Interchange (CDI), a core pillar of HKMA’s “Fintech 2025” strategy. The HKMA officially launched the CDI in October 2022, with a proof-of-concept study dating back to November 2020. During the pilot launch phase, the CDI facilitated over HKD 1.6 billion of SME loans or 800+ loan cases. This initiative aimed to enable more efficient financial intermediation in the banking system and to facilitate the innovative use of commercial data to enhance financial services.
The CDI connects five types of stakeholder, namely, data owners (i.e. SMEs), data consumers (i.e. financial institutions), analytics service providers, solutions providers and data providers (i.e. commercial entities that collect the digital footprint of data owners). With the CDI, each bank and data provider has connections to the platform, making it simple for them to link their systems to the infrastructure for data access. This allows SMEs to share their digital footprints with their banks. The data help banks in a number of ways, including KYC, credit underwriting, product development, customer acquisition and credit monitoring. As we have highlighted, the use of alternative data is especially important for credit decisioning.
Figure 5: HKMA CDI initiative
Source: HKMA CDI, Accuracy
Hong Kong is not alone. The Data Governance Act, as approved by the European Parliament in April 2022 and applicable from September 2023, aims to increase data sharing in the EU so that businesses and start-ups can access more data. The regulations will allow greater use of data gathered in various public sector domains. They also enable the construction of shared European data platforms in various fields, including finance.
Additionally, policymakers in India, Japan, Singapore and South Korea are proposing a number of initiatives to encourage and accelerate the adoption of data sharing frameworks in the banking industry. For instance, the Monetary Authority of Singapore and the Association of Banks in Singapore have released an API playbook to promote data interchange and communication between banks and Fintechs.
Advanced analytics solutions
Banks are searching for better credit scoring techniques to improve the predictive power of their models. For example, they are developing or considering AI and machine learning for their credit scoring. These methodologies are more sensitive to real-time indicators of an SME’s creditworthiness than traditional credit scoring methods.
Decision tree and random forest are among the most commonly considered machine learning techniques that can be applied in SME credit scoring. Researchers are also exploring new solutions such as hybrid BWM and TOPSIS, when facing issues of insufficient data.[1] A detailed discussion on various advanced analytics solutions can be found in the next part of this paper.
Leveraging the enormous amounts of data gathered, Shopify, a leading all-in-one e-commerce platform that powers millions of businesses globally, has become a leader in using machine-learning techniques. Not surprisingly, it has launched Shopify Capital, a data-powered product that enables merchants to secure funding and accelerate their business growth. According to the company, Shopify has constructed Shopify Capital using a version of a recurrent neural network (RNN) that analyses more than 70 million data points across the Shopify platform to understand trends in merchants’ growth potential and provide cash advances that match their business needs. Since its inception, Shopify Capital has provided over USD 3.8 billion in funding.
Process streamlining & automation
In this digital era, another rising trend is the demand for seamless and automated credit approval processes. This trend is prominent in all phases of credit application, from client interactions to data collection (e.g. use of API, open banking, the CDI initiative), credit decision-making (automated models) and result communication (workflow streamlining and integration). Automation’s ultimate goal is to speed up the banking services for clients while reducing decision-making time, saving money, and improving productivity and efficiency for banks. In some cases, SME loan processing is part of the broader banking CRM suite, making it easier for banks to manage the whole customer lifecycle digitally.
Let’s take NeoGrowth, for example. NeoGrowth is a pioneer in SME lending in India, with a unique underwriting model based on digital transactions. The company has used technology to provide consumers with a smooth and seamless digital experience, where the entire process flow – from lead generation to loan origination, approval, disbursement and collections – is handled digitally.
[1] BWM – best worst method; TOPSIS – technique for order of preference by similarity to ideal solution
Figure 6: Digitally integrated operations in SME loan lifecycle
Source: NeoGrowth annual report, Accuracy
SME CREDIT SCORING APPROACHES
SME credit scoring refers to risk models that help financial institutions gauge SMEs’ creditworthiness and risk level. Traditionally, most banks have used credit scorecards based on logistic regression because they are simple to use and interpret. However, as mentioned above, with the rapid development of the data and analytics fields, some banks have started to adopt more advanced and dynamic models. In particular, decision models based on machine-learning techniques (e.g. decision trees, random forests) have gained popularity in recent years.
Figure 7: SME credit scoring flow
Source: Accuracy
Logistic regression-based credit scorecards
Credit scorecards developed through a combination of weight of evidence (WoE) transformation and logistic regression are among the most commonly used credit decision tools in banks. These scorecards have been widely used over the past few decades; they are well tested and have proved their effectiveness. Today, they remain the most popular scorecards used and maintained by banks, thanks to their simplicity of use and explanation, while remaining effective.
There are seven important steps in developing a logistic-regression-based SME credit scorecard, including data processing, variable transformation and selection, logistic regression, performance inference, segmentation analysis, scorecard scaling and scorecard validation. In contrast to retail credit scorecards, segmentation analysis is usually performed for SME scorecards as companies in different sectors may exhibit very different risk characteristics. A detailed discussion on retail credit scorecards can be found in our previous whitepaper – FINANCIAL SERVICES & BANKING: RETAIL BANKING TRANSFORMATION – CREDIT SCORING.
Figure 8: major steps in SME credit scorecard development
Source: Accuracy
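The WoE transformation and scorecard-scaling steps described above can be sketched in a few lines of Python. The binning, the good/bad counts and the scaling convention (600 points at 50:1 odds, 20 points to double the odds) are illustrative assumptions, not a bank's production settings:

```python
import math

def weight_of_evidence(bins):
    """WoE per bin: ln(share of goods in bin / share of bads in bin)."""
    total_good = sum(g for g, b in bins.values())
    total_bad = sum(b for g, b in bins.values())
    return {name: math.log((g / total_good) / (b / total_bad))
            for name, (g, b) in bins.items()}

# Hypothetical binning of one SME ratio variable: bin -> (goods, bads)
bins = {"ratio < 2.5%": (400, 20), "ratio >= 2.5%": (100, 80)}
woe = weight_of_evidence(bins)   # positive WoE = safer-than-average bin

# Scorecard scaling: 600 points at 50:1 odds, 20 points to double the odds
pdo, base_score, base_odds = 20, 600, 50
factor = pdo / math.log(2)
offset = base_score - factor * math.log(base_odds)

def points(log_odds):
    """Map a model's log-odds of being 'good' to a scorecard score."""
    return offset + factor * log_odds
```

A logistic regression fitted on the WoE-transformed variables would supply the log-odds passed to `points`; under this scaling, doubling a borrower's odds adds exactly `pdo` points to the score.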
Decision tree
Machine learning now makes a substantial contribution to making credit decisions thanks to the rapid growth in data availability and computer capacity. One of the most popular supervised learning techniques is tree-based machine learning.
A decision tree is made up of branches and nodes; at each node, it uses a feature from the dataset to recursively partition the training sample. The algorithm iterates through all conceivable binary splits in search of the feature and related cut-off value that best separates instances with predominantly higher credit quality on one side from those with relatively low credit quality on the other. As an example, we can build a decision tree as below:
Figure 9: Illustrative decision tree
Source: Accuracy
• The most crucial factor is the interest-expense-to-sales ratio: the sample model finds that dividing the data in the root node into instances with the ratio < 2.5% and those with the ratio ≥ 2.5% optimises the figure of merit.
• This process is then repeated for each new daughter node – here loan size, working-capital-to-debt ratio, firm age and cash-to-sales ratio – until a leaf node is reached where the stopping requirement is met.
• Finally, the tree provides the probability of default for each leaf node, with a threshold of 0.2 in this case. Any borrower falling in a leaf below the 0.2 threshold is granted the loan, while the others are rejected. This threshold is at the discretion of the lender.
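A fitted tree of this kind can be walked with a plain decision function. The sketch below mirrors the illustrative tree using only a subset of its features; the cut-off values beneath the root and the leaf probabilities of default are hypothetical:

```python
# Hand-coded version of an illustrative credit decision tree.
# Split values below the root and leaf PDs are hypothetical.
def predicted_pd(borrower):
    if borrower["interest_expense_to_sales"] < 0.025:  # root split at 2.5%
        if borrower["firm_age_years"] >= 5:
            return 0.05   # leaf: low financing cost, established firm
        return 0.15       # leaf: low financing cost, young firm
    if borrower["working_capital_to_debt"] >= 1.0:
        return 0.25       # leaf: high financing cost, decent liquidity
    return 0.45           # leaf: high financing cost, weak liquidity

def decision(borrower, threshold=0.2):
    """Grant the loan when the leaf's probability of default is below 0.2."""
    return "approve" if predicted_pd(borrower) < threshold else "reject"
```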
Random forest
A random forest combines many different decision trees to get a prediction that is more precise and reliable. When compared with a single decision tree, a random forest avoids overfitting concerns, especially when there are enough trees in the forest.
To improve performance, numerous decision trees should be created in a random forest. The distinctness of each decision tree in the random forest is ensured by the random selection of data subsets and features. Overfitting is prevented since the model outcome is based on the combined predictions from each individual decision tree model. However, it is crucial to note that the individual decision tree models should not correlate highly with one another in this situation. The random forest approach does not make any assumptions about the data or its distribution, unlike many other algorithms (such as linear regression, SVM, etc.). Consequently, it typically only needs minor data transformations. As the random forest technique uses random feature subsets, it can work well with high-dimensional datasets (a dataset with a large number of features).
The random forest is especially effective compared with other models under the following circumstances:
• When there are outliers in the dataset, the random forest technique is unaffected by them.
• Many algorithms may take noise in the dataset as patterns (or extra manual power is required to remove outliers); however, the bagging method employed in random forest ensures that the noise in the dataset is not mistaken as signals or patterns.
• The random forest includes efficient methods to estimate missing values and preserve accuracy when there are missing values in the dataset, even when a sizable fraction of the data is missing.
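The bagging idea behind the random forest can be sketched in pure Python. For brevity this uses one-split "stump" trees and majority voting; a real implementation would grow deeper trees and also sample random feature subsets at each split:

```python
import random

def train_stump(X, y):
    """Fit a one-split 'tree': pick the (feature, threshold) pair that best
    separates defaults (label 1) from non-defaults (label 0)."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            # predict 1 (default) whenever feature j >= threshold t
            errs = sum((row[j] >= t) != (label == 1) for row, label in zip(X, y))
            if best is None or errs < best[0]:
                best = (errs, j, t)
    _, j, t = best
    return lambda row, j=j, t=t: int(row[j] >= t)

def random_forest(X, y, n_trees=25, seed=0):
    """Train each stump on a bootstrap resample, then vote at prediction time."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]  # bootstrap sample
        stumps.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    def predict(row):
        return int(sum(s(row) for s in stumps) > n_trees / 2)  # majority vote
    return predict
```

Because each stump sees a different bootstrap resample, no single noisy observation can dominate the ensemble, which is what gives the forest its robustness to outliers and overfitting.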
Figure 10: Illustrative random forest structure
Source: Accuracy
Hybrid BWM and TOPSIS method
In addition to logistic regression and machine-learning techniques, an alternative method to develop an SME credit-scoring model is the hybrid BWM and TOPSIS.
• BWM is a decision-making logic tool that requires fewer data and less effort in development. It aims to find the optimal weights by minimising the gaps between actual weights and business judgement. Specifically, this process can be described as a linear model.
• TOPSIS is a tactic to determine an SME applicant’s relative position in contrast to a pool of borrowers. It finds the relative rank by calculating the weighted normalised matrix, obtaining the positive and negative ideal solutions, computing the Euclidean distance of the applicant between the positive and negative solutions and computing the relative closeness to the ideal solution.
This hybrid BWM and TOPSIS method requires more subjective judgement, but it offers unparalleled performance in terms of cost, ease of development and implementation, and flexibility.
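The TOPSIS steps described above – weighted normalisation, positive and negative ideal solutions, Euclidean distances and relative closeness – can be sketched as follows. The decision matrix, weights and benefit/cost flags are hypothetical inputs; in the hybrid method, the weights would come from the BWM step:

```python
import math

def topsis_score(matrix, weights, benefit):
    """Rank applicants by relative closeness to the ideal solution.
    matrix[i][j] is applicant i on criterion j; benefit[j] is True when
    a higher value of criterion j is better."""
    n_crit = len(weights)
    # vector normalisation of each criterion column, then weighting
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    # positive/negative ideal solutions, per criterion direction
    pos = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    neg = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    def dist(row, ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))
    # relative closeness in [0, 1]; higher = closer to the ideal applicant
    return [dist(row, neg) / (dist(row, pos) + dist(row, neg)) for row in v]
```

An applicant that dominates the pool on every criterion scores 1, one dominated on every criterion scores 0, and everyone else falls in between, giving the relative rank used for the credit decision.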
Figure 11: Illustrative steps in BWM and TOPSIS method
Source: Accuracy
Comparison of different modelling approaches
Overall, different modelling techniques, including but not limited to those described above, can be used to facilitate credit decisioning. These methodologies have their relative strengths and weaknesses, which we summarise in the table below.
Figure 12: Comparison of different modelling approaches
Source: Accuracy
FINAL REMARKS
SMEs make up a sizeable portion of the economies of both developed and emerging markets. Providing access to financing for small businesses has been a challenging task due to a variety of factors, including the expense and difficulty involved in determining the creditworthiness of small businesses with a lack of sufficient quality data.
However, banks are now able to lower the costs of originating and underwriting loans to SMEs while also increasing the performance of their SME loan portfolios, leveraging alternative data and new modelling techniques. These developments have led to an overall increase in the financing accessible to SMEs, and in time, will drive employment and economic growth.
At Accuracy, we have created our own SME rating model, Accur’Rating®. We originally developed it using logistic regression but have recently migrated it to a random forest model. We use this model to help clients evaluate investments in private debt, understand the credit quality of various corporates and more.
We are in an era in which SMEs will become even more important for global economic development and driving innovation. With the right incentives, technology and knowledge, now is the time for banks to tap into and expand SME banking.
For our sixth edition of Accuracy Talks Straight, Jean Barrère presents the editorial, before letting Romain Proglio introduce us to Wintics, the specialist in intelligent video analysis for mobility operators. We will then analyse data in China with Frédéric Recordon and Helena Javitte. Sophie Chassat, Philosopher and partner at Wemean, will ask the question of whether data has a conscience or not. Then, we will evaluate our data assets with Isabelle Comyn-Wattiau, Professor at ESSEC Business School, holder of the Information Strategy and Governance Chair. Finally, we will focus on the dual transition to energy and digital with Hervé Goulletquer, our senior economic adviser.
As Victor Hugo did for the toilers of the sea, we must start with a homage to all toilers of data.
Observe the chief data officer setting out the fundamental difference between ‘raw data’, ‘information’ and ‘knowledge’ and reminding us of the oh-so complex nature of switching from one category to another.
Watch the CIO mobilising exponential technologies through connected platforms to capitalise on the organisation’s digital and information assets more quickly.
Pause for a moment to appreciate the grace of a Baudelairean gesture. The data scientist is infusing data with art: ‘You gave me your mud and I have turned it to gold’, he proclaims!
Pick up speed again with the decision-maker, on the lookout for some form of informational advantage. Embark on a trip with the CEO over rough seas, taking the organisation on a path to difficult data-driven transformations!
The HR director might be in charge of creating dedicated paths to attract and retain these rare profiles, but the head of finance is more interested in the multiple forms of data value: economic, financial, utility, market, exchange… How can we assess this intangible asset?
Time to move on and applaud! At the front of the stage, the politician sets limits to all things digital and tidies the mess made by the use of our private data!
Make room for thought. Behind the curtain, the philosopher disturbs the order of our digital lives and challenges the Data-Being. Is binary now the language of truth? Is it possible to translate all human experience into 0s and 1s?
When a topic as multifaceted as data mobilises so many profiles and so much knowledge, capital and liquidity, intelligence and technique, material to argue for and against… when this dialectic gives rise to so much wealth and so many new forms of living together, it is because there lies, at its heart, an essential debate that must be brought to life.
Created at the end of 2017 by its three founders, and armed with four years of research and development, Wintics positions itself as the specialist in intelligent video analysis for mobility operators. The company markets its analytical products to four types of mobility infrastructure operator: regional public authorities, public transport operators, airports and ports.
For regional public authorities, the start-up has developed a particularly innovative artificial intelligence software solution (called Cityvision), which can connect automatically to any camera, whether optical or thermal, old or new, in order to extract large amounts of data on mobility, the safety of public spaces and urban cleanliness. For example, the software is able to analyse cycle path traffic and use in order to help the city organise its mobility.
The solution also enables its clients to manage their infrastructure in real time, for example by transferring the data collected and analysed by Wintics to the traffic-light system, helping to improve the flow of traffic in a highly targeted way.
For transport operators, Wintics provides the opportunity to visualise movement flows and the level of passenger traffic in real time. Airport operators are able, for example, to supervise the various passenger flows arriving on site and to facilitate their movement around the airport thanks to the real-time management of queues at check-in desks and passport control.
Wintics is positioning itself as an innovative and strategic solution to make cities greener by prioritising the development of soft mobility, the attractiveness of public transport and the improvement of travel flows. The camera has become a management tool for an efficient and safer urban environment.
Wintics is an entirely French company that offers a solution 100% made in France. It won the 2018 and 2019 editions of the Paris Grand Prize for Innovation, was certified by the Greentech Innovation label and in 2020 joined the ranks of the best artificial intelligence start-ups in mobility in Europe. Together, the Wintics experts (around 15 today) have already completed various projects in over 30 French cities.
Data, one last reason to take an interest in China?
Why should we still take an interest in China? The signals it is sending of a country
locked up, tempted to turn in on itself and asserting an alternative model of
society are now leading people to understand it in terms of risk analysis. The
latest studies from the European and US chambers of commerce in China testify
to a significant re-evaluation in the strategies of foreign companies.1
Yet, in this sombre context, for those of us who have
been working in the country for over 10 years, China is a country that deserves
attention from Europe. The most relevant reasons to take an interest might not be those that
come to mind first, however. Some may even prove discomfiting. What if China
was ahead of the West? Ahead in the thinking that is shaping the world of
tomorrow? In the absence of a commercial El Dorado… ideas!
The source of this Chinese head start is data. The country has numerous advantages: structural
– 18% of the global population provides an unparalleled testing ground to
explore new ideas; economic – favourable regulation and the abundance of
tech investment; cultural – the willingness to launch quick-and-dirty solutions,
which are later improved or abandoned, where Westerners prefer to launch
more finished products.
This article aims to analyse through three different
lenses how China considers data. (1) How regulation turns data into a competitive advantage. (2) How data
is at the heart of retail transformation. (3) How it uses data to create new
business models.
1. REGULATION FAVOURING COMPETITIVE ADVANTAGE
The first thoughts on data as a factor of production
started in China at the beginning of the 2000s and continued throughout the
following decade with the creation of a regulatory framework to launch a data
exchange platform.
The turning point came in April 2020 when data was
officially considered as the fifth factor of production, on the same level as
capital, labour, property and technology.
This is effectively the birth certificate of a data
economy considered as the disruptive accelerator for the growth of Chinese
companies.
The first objective of public authorities is to
encourage players to structure their data in such a way as to facilitate their sharing.
For this, the government has put in place public platforms.
From 2019, the SASAC, the governmental body
that supervises state-owned companies, published a list of 28 state-owned and
private companies tasked with federating their industries through sectoral
platforms.
The China Aerospace Science & Industry Corp. is in charge of aeronautics; the CSSC,
of naval construction; Haier, via its COSMOPLAT platform, of 15
different sectors (electronic, industrial manufacturing, textile, chemical
industry, etc.).
The second objective aims to create a data exchange platform.
Led by local
authorities (Shanghai, Beijing, Shenzhen, Hainan, Guangzhou), this takes the
form of free-trade zones and pilot data trading platforms.
Thus, the Shanghai Data Exchange Centre (SDEC) is similar to a technology exchange guaranteeing the legal conformity of transactions for member companies, whilst the Beijing International Big Data Exchange favours the sharing of public data at national level with the hope of international expansion.
These initiatives show that China has started to lay
the foundations of the data economy.
It is trying different things, experimenting with answers to the most crucial question: how can data be transformed into an item of value? A first challenge lies in the multitude of data types – personal, financial, industrial, meta, etc. – as well as their often incompatible formats.
Their standardisation and exchange protocols are
crucial stakes for leadership in the world of tomorrow. In parallel, we also
have the question of valuing data. The SDEC is currently working on
these questions of ownership, source, quality, certification and price setting.
As we can see, China has started thinking about the
new asset that data has become. It is advancing in incremental steps, leveraging public and private
economic actors, thus building a gigantic world of possibilities.
2. DATA, AT THE HEART OF RETAIL TRANSFORMATION
‘Today, we don’t know how to monetise data, but we do
know that people will not live without data. Walmart generates data from its
sales, whilst we do e-commerce and logistics to acquire data. People talk to me
about GMV2 but we’re not looking for GMV. We sell purely to get
data, and that is very different to Walmart.’3
This is, in just a few words from Jack Ma, founder of Alibaba,
the fundamental difference between China and the
West:
WHEREAS WE SEE E-COMMERCE AS AN
ADDITIONAL DISTRIBUTION CHANNEL, THE CHINESE SEE IT AS A DATA MINE.
Though comparing the combined figures for Black Friday, Thanksgiving and Cyber Monday in the US ($25 billion) with the Chinese Double 11 ($139 billion)4 shows China’s significant lead, it does not take account in any way of this difference of philosophy.
The fact that China is much more connected than the
US and Europe – 99.6% of Chinese internet users access the internet from
their smartphones – hides what is most important.
Limiting ourselves to quantitative
analyses would be to misunderstand the disruptive nature of Chinese retail. The
giants of e-commerce have created innovative payment solutions leading to their
dominance of retail and their leadership in mobile payments.
This explains the dizzying growth of
retail that depends on a fundamentally different approach from traditional
players. Alibaba offers the most complete example with its concept of
New Retail, defined in 2015. Two characteristics shape this model: (1)
Alibaba positions itself above all as an intermediary facilitating exchanges
between retailers and customers. (2) Alibaba has modelled a holistic ecosystem,
each segment feeding into the others thanks to the data created by the
transactional system.
As an intermediary, Alibaba offers retailers its
digital tools in branding, traffic generation, etc. as well as its financial
services that are highly appreciated by SMEs neglected by banks.
Concerning consumers, Alibaba makes
available to them a universal platform for all their daily needs: social relationships,
administrative operations, consumer loans, etc.
Alibaba therefore sets itself apart from its
Western equivalents. It operates an ecosystem, the purpose of which is to
produce, analyse and monetise data, whilst its Western equivalents remain, despite
their latest developments (cloud, etc.), integrated
distributors whose data is only a result. For Alibaba, retail is the support function, in no way the raison d’être. Its leadership relies less on GMV than on its central position in the generation and exploitation of data. Alibaba has come a long way since Jack Ma’s declaration on 16 June 2016 at the China Internet+ Conference 中國互聯網+峰會 that Alibaba ‘doesn’t know how to monetise its data’!
Since then, seeing huge opportunities far
beyond its current revenues, Alibaba has transformed its ecosystem and its
services. As a result of its perspective on data, China is leading the transformation
of a whole industry, potentially paving the way for its Western counterparts.
3. DATA, A SOURCE OF NEW BUSINESS MODELS
Even though the New Retail example
illustrates China’s capacity to pivot an industry from the sale of goods to
the monetisation of its data, the spectacular development of electric
vehicles highlights its ability to create innovative business models from
scratch.
This is the example of electric charging stations. An electric charging station essentially differs from a fuel station in two ways. First, the charging time encourages users to charge their vehicles at home or place of work, which translates into very low utilisation (below 5%) of charging stations located in public spaces. Second, as the price of electricity is strictly regulated, operators’ very low margins prove insufficient to generate a return on investment. The solution in China was to shift the focus from the driver (the focal point of the fuel model) to the electric ecosystem.
In order to be successful, a Chinese operator considers itself a service platform for drivers, site providers (i.e. developers), local councils in their town policies, electricity providers, etc. It is not just about selling energy anymore but about optimising flows and prices: traffic, energy flows, etc. The critical point is, once again, data. The start-up X-Charge 智充科技, a specialist in SaaS B2B services, which our Beijing office knows well having worked with them, is illustrative of this business model revolution. It enables charging station operators to analyse their data in real time, adjust their prices by station based on the utilisation rate and road traffic, store electricity under the best conditions and sell it back to electricity providers or building managers during peak periods, etc.
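The pricing logic described above can be sketched in a few lines. This is a hypothetical illustration of utilisation- and traffic-based price adjustment, not X-Charge's actual algorithm; the coefficients, bounds and function name are invented for the example.

```python
def station_price(base_price: float, utilisation: float, traffic_index: float) -> float:
    """Illustrative dynamic price for one charging station (not a real product's logic).

    base_price    -- regulated energy price per kWh (EUR)
    utilisation   -- share of time the station is occupied (0.0-1.0)
    traffic_index -- nearby road traffic relative to its average (1.0 = normal)
    """
    # Raise the service margin when the station is busy and traffic is high;
    # lower it to attract drivers when the station sits idle.
    demand_signal = 0.5 * utilisation + 0.5 * min(traffic_index, 2.0) / 2.0
    margin = 0.05 + 0.10 * demand_signal  # service fee kept between 5% and 15%
    return base_price * (1 + margin)

# A quiet station discounts relative to a saturated one.
quiet = station_price(0.30, utilisation=0.05, traffic_index=0.8)
busy = station_price(0.30, utilisation=0.90, traffic_index=1.5)
assert quiet < busy
```

The point of the sketch is the direction of the feedback loop: prices follow the data (occupancy, traffic), not a fixed fuel-style tariff.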
The start-up has developed predictive models of
activity and revenues that are highly appreciated by operators. It comes as no
surprise that Shell Ventures invested during its Series B; beyond a financial
investment, it is a disruptive model that the major company came looking for in
China. It is quite obvious that the race to build the world of tomorrow has
started and China seems intent on establishing its leadership through
innovation guided by the state and relayed by the tech giants. In this
strategy, data is clearly considered a critical asset. It is designed to secure
the country’s future place in the world. In parallel, the monetisation of
data will generate gigantic revenues that only a few players will control
sufficiently to maximise their gains.
In some sectors, only the monetisation of data can, at
least in a transitory phase, make capital-intensive business models viable.
For all these reasons, we consider it essential to take
an interest in these topics and why not to take inspiration from certain
initiatives in China.
____________
1 The latest study is that of the Chambre de Commerce et de l’Industrie France Chine (CCIFC – the France–China chamber of commerce and industry), conducted from 2 to 14 September 2022 with 303 French companies: 79% consider a deterioration in China’s image; 62% see an impact on their profits; 58% are revising their investment strategies in China; 43% do not plan to increase their presence in the next three years; and 16% are considering reducing their presence in China.
2 GMV: Gross Merchandise Value
3 Jack Ma, speech at the China Internet+ Conference (中国互联网+峰会), 16 June 2016
4 2021 data; sources: Forbes, Bloomberg
Sophie Chassat Philosopher, Partner at Wemean
Zombie Data
‘Is Data conscious?’ This question, asked in relation to a character from
the series Star Trek, is taken up by the philosopher David Chalmers in his
latest book Reality +.1 Data is the name of an android. In
the episode of the series titled ‘The Measure of a Man’, a trial takes place
to determine whether Data is an intelligent and conscious being.
There is no doubt about the intelligence of the humanoid
robot: Data has the
capacity to learn, to understand and to manage new situations. However, the
question of whether Data is conscious remains unanswered. Does Data have an inner
life with perceptions, emotions and conscious thoughts? Or is Data what
philosophers call a ‘zombie’? In philosophy, a zombie is a system that, on
the outside, behaves like a conscious being but, on the inside, has no
conscious experience. It behaves intelligently, but has no inner life or reflexivity
about its actions.
Chalmers starts with this story to question whether a digital system can be conscious or whether only humans and animals are gifted with consciousness. For this astounding Australian philosopher, a system perfectly simulating the functions of a brain could be conscious in the same way as a biological brain. This leads him to dizzying speculations: in that case, mirroring that logic, isn’t our actual consciousness just the effect of a simulation? Don’t we already live in a metaverse, and isn’t our god a computer?
If we make the story of the rather aptly named Data an
allegory, we can use it to raise a simple ethical question when we exploit data.
What type of data are we dealing with: Zombie Data or Conscious Data? In
the first case, we harvest data that seem to behave intelligently, but
ultimately their content is empty and without interest. We have all had the
experience of trawling through masses of data for sometimes very little reward,
or even absurd results! We can add that the data transform us, too, into
zombies… Because here we are, reduced to aggregates of outer behaviours
(purchases made, keywords typed into search engines, conversations held on social
media, etc.) supposed to encapsulate our inner desires – which remain a little
more subtle, nevertheless. Zombie Data make Zombie People!
As for Conscious Data, we can be certain that Big
Data do not have the system consciousness that Chalmers deems entirely plausible
in the future. The only thing left to do, then, is for human consciousness,
from the outside, to give meaning to data, to humanise them. This is just like Data the android,
who needs a human friend to evolve, a role fulfilled by Captain Picard in Star
Trek. Conscious People make Conscious Data!
____________
1 David J. Chalmers, Reality+: Virtual Worlds and the Problems of Philosophy, Penguin Books/Allen Lane, 2022
Isabelle Comyn-Wattiau Professor at ESSEC Business School, Chair of Information Strategy and Governance
Valuing our wealth of data, a challenge no company can escape
Evoking the value of data in 2022, when the media is overflowing
with examples of companies suffering damage linked to data, may seem
counter-intuitive. Yet, it is well known that data has value, and it is the very reason why
the attacks targeting data are not just simple cyberattacks. More and more,
they aim to seize the informational wealth of the target organisations.
Data security can be broken down into three areas: availability,
confidentiality and integrity. Attacking an information system compromises its availability, thereby endangering
the process that the system underpins. This is what we were able to observe at
the Corbeil-Essonnes hospital a few months ago. Due to a lack of available data
linked to the patient, the diagnosis and care process is made longer and more
expensive. It can even have an impact on patient health by delaying a course of
treatment. During these attacks, we also fear a confidentiality breach of
highly sensitive data.
And, if by chance the computer hackers modify these data, they could compromise their integrity. Thus, all three parts of data security are affected, with extensive damage: first, the health of the patient, but also the reputation of the hospital and the cost linked to the restoration of the information systems and all the affected processes. Limiting ourselves to the security of the data is a reductive defensive approach, even if we cannot rule it out.

Determining the value of data is a significant issue for most companies. The press publishes daily success stories of start-ups where a good idea for sharing or pooling highly operational information leads to new, unsuspected value. Thus, in 2021, the market capitalisation of Facebook reached around $1 trillion, but the net value of the company based on its assets and liabilities was only $138 billion.1
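The gap between market capitalisation and book value can be made explicit with a quick back-of-the-envelope calculation, using only the figures quoted in the text:

```python
market_cap_bn = 1_000   # ~$1 trillion market capitalisation in 2021 (as quoted)
book_value_bn = 138     # net value based on assets and liabilities (as quoted)

# The difference is the value the market attributes to intangibles --
# above all the user data feeding the advertising algorithms.
intangible_gap_bn = market_cap_bn - book_value_bn
gap_share = intangible_gap_bn / market_cap_bn

print(f"Unexplained by the balance sheet: ${intangible_gap_bn}bn "
      f"({gap_share:.0%} of the market cap)")
```

Roughly $862 billion, over 85% of the company's market value, sits outside the traditional balance sheet.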
The difference in terms of value can be explained by
the data that Facebook collects from its users and uses in turn to feed its
advertising algorithms. For economists, data represent unrivalled assets (in
that they can be consumed by various users without diminishing), which do not
necessarily depreciate as we use them; on the contrary, they can generate new
information, for example when combined with others. For some, however, the
value does depreciate, and very quickly at that. All these characteristics mean
that data fall into a highly specific class of asset that resembles no other
intangible asset, brand, software, patent, etc.
Tackling the value of data also requires us to agree on
the vocabulary to be used: data versus information. Without reopening the debate on the difference
between data and information, we can consider them identical in an initial
approach to the topic. Some, however, will wish to distinguish data – the input
of the system, unmodifiable, a result of measuring a phenomenon – from
information – the output of the system after cleaning, restatement, refinement,
aggregation, transformation, etc.
The value of information has been studied in line with
accounting practices, notably by Moody and Walsh.2 They first
endeavoured to demonstrate that information can be considered as an asset: it
offers a service and an economic advantage, it is controlled by the
organisation and it is the result of past transactions. They then proposed three
appraisal methods to value information.
The first is based on costs – of acquisition,
processing, conservation, etc. It is the easiest to put in place because these elements are more or less
present in the financial controller’s dashboard. However, these costs do not
reflect all aspects of data, for example the development of their value over
time. The second appraisal method is based on the market and consists of
determining the value that can be obtained by selling the data.
Here, we talk about an exchange value. This approach requires a considerable
effort. Moreover, it is not always possible to obtain a reliable measure of the
value of the data. Finally, the third method is based on utility.
This means appraising the use value of the data by estimating
the economic value that it can generate as a product or a catalyst. But this
value is difficult to anticipate, and estimating the share of its catalytic effect
is also highly complex.
Thus, it seems that the various approaches to
determine the value of data are partial but complementary: some are based on the use value or
exchange value of data; others assume rational corporate behaviour and assess
data at the level of investment made to acquire it and manage it throughout its
life cycle; still others are based on risk. The risk approaches see data as
the target of threats to the company or organisation. Such risks may be
operational; thus, the missing or damaged data may cause certain processes to
function poorly.
But there are also legal or regulatory risks, as more
and more texts stipulate obligations for data; the General Data Protection
Regulation is just one example, albeit the most well known, no doubt. The
risks can also be strategic when they concern the reputation of the company or
lead the company to make bad decisions.
Finally, some authors have taken an approach based on externalities
for open data, which are available to all but which, by making the most of
them, can bring a benefit to society at large.
The concept of data value is linked to the objective of
proper data governance: maximising data value while minimising the associated risks and costs.3
By adopting this three-pronged approach (value, risk and cost), we can better obtain
a holistic view of data value and improve its valuation.
These three aspects are complementary, but we must not
exclude context.
Indeed, the same information does not have the same
value depending on the temporal, geographic, economic or political context in
which the valuation process is conducted. The question of why the valuation is
needed must be answered in order to characterise the relevant contextual
elements: political, economic, social, technological, ecological and legal
(PESTEL) in particular.
The object of the valuation must itself be identified.
One of the difficulties in estimating the value of data is choosing the
appropriate level of detail: are we talking about the entirety of an
information system (e.g. the client information system) or
a set of data (e.g. the client database) or indeed a
key piece of information (e.g. the launch price of a competing product)? It
is clear that the value of an information system is not the simple sum of the
value of its components.
Few approaches for the valuation of data are sufficiently
holistic and general to enable their application to any type of data in any
context. Recommendations can be made, for example, to choose
between a top-down approach and a bottom-up approach. But an approach
can only be truly holistic if it combines these two value paths.
It is because the company is still unable to measure
the real and potential value of data that it does not make sufficient
investment in data governance and information sharing.
It is a vicious circle that ultimately leads to the company
being unable to realise the full value of data.
A virtuous circle can be built by starting with the
most critical data, for example (but not necessarily) client data, and
gradually getting on board all data actors – producers, transformers, sellers,
distributors, consumers of these data. They have the different points of
view necessary for a holistic approach.
____________
1 J. Akoka, I. Comyn-Wattiau, ‘Evaluation de la valeur des données – Modèle et méthode’, Proceedings of the 40th INFORSID Congress (INFormatique des ORganisations et Systèmes d’Information et de Décision), Dijon, 2022
2 D. Moody, P. Walsh, ‘Measuring the Value of Information – An Asset Valuation Approach’, Proceedings of the European Conference on Information Systems (ECIS), 1999
3 World Economic Forum, Articulating Value from Data, White Paper, 2021
The twin green and digital transitions: resolve for investment and perspicacity for macroeconomic management
The world economy is facing numerous challenges. In the short term, we have an unusual pace of price rises and a deterioration of growth prospects, taking place in complicated domestic political environments in many countries and in a worrying international environment (the actions of Russia in Ukraine, of China around Taiwan and of Iran with its Arab neighbours). In the long term, ageing populations are of concern in a number of regions around the globe, economic ‘regulation’ seems to be moving away from the neoliberal corpus towards a more Keynesian approach and the twin green and digital transitions are under way.
Let us pause on this last point. The green transition is essential. It is essential for the preservation of the planet and of all the species that live on it. We must ‘decarbonise’ industry and transport, succeed in the energy renovation of buildings and develop renewable energy on a large scale. The digital transition is also indispensable. It represents the continued process of enabling companies, administrations and households to incorporate new technologies (for example, the cloud, the internet of things or artificial intelligence) in many aspects of their activities. It is worth bearing in mind that the necessary transformations are not purely technological issues; there is a very significant human aspect, with cultural and behavioural adaptations to be made.
The amount of investment in play is impressive. For the eurozone alone, considering an annual envelope of €500 billion a year, for multiple years (certainly no less than 10), does not seem unreasonable. At least that is the order of magnitude determined when summarising some authoritative work on the subject. That represents more than four points of GDP!
The sums committed are so vast that questioning their macroeconomic implications would not be a futile exercise. Let us propose a simple forecast to 2032. The starting point is this resolve for investment linked to the twin transitions: the €500 billion a year, which, when converted from current to constant prices (the basis on which GDP growth is measured), becomes €440 billion. The other elements of demand, including investment spending outside of the twin transitions, remain on the same trajectory as observed over the past few years, with one exception: extra investment is reflected in more imports and therefore in a reduction of the external trade surplus. For this exercise, we assume that there will be no shock from prices or economic policy over the period.
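The orders of magnitude quoted here can be checked quickly. The eurozone GDP figure is an assumption of ours (roughly €12 trillion in current prices); the €500bn and €440bn figures come from the text.

```python
gdp_bn = 12_000           # assumed eurozone GDP, EUR bn, current prices
invest_current_bn = 500   # annual twin-transition investment, current prices
invest_constant_bn = 440  # same envelope in constant prices, as quoted

# "More than four points of GDP"
share_of_gdp = invest_current_bn / gdp_bn
assert share_of_gdp > 0.04  # ~4.2%

# The implied cumulative price adjustment between the two measures
implied_deflator = invest_current_bn / invest_constant_bn  # ~1.136

print(f"{share_of_gdp:.1%} of GDP; implied deflator {implied_deflator:.3f}")
```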
THE TABLE ABOVE HIGHLIGHTS THE MAIN IMPLICATIONS TO
CONSIDER.
THREE ARE PARTICULARLY NOTEWORTHY:
• GDP growth would reach 1.5% a year. Though
this forecast exercise appears reasonable, we must admit that potential for
growth is estimated at 1% a year. Of course, we could consider that the
additional investment effort will contribute to more growth. But we could also
defend the idea that, at least in part, this new accumulation of capital would
replace the destruction of fixed assets that have become obsolete.
We must not forget demographic developments either,
which send a rather negative message about the active population (effect to be
offset perhaps by a return to a situation with close to full employment).
In any case, one suspicion remains: is the
quantification based on these assumptions too optimistic?
• The share of household consumption in GDP would
fall by 2.5 points over the period to reach 49.5%. The current level is
already not particularly high: 52% against an average of 55% between 1995 and
2010 (and a high of 59% in 1980), a period that was therefore followed by a
gradual decline. With the change in macroeconomic ‘regulation’ that we are
starting to see, one that emphasises more inclusive growth, is this really
credible?
• If the investment / GDP ratio must progress by
almost 4.5 points by 2032, then savings must follow; this is how
macroeconomic balances work! Where could this come from? In part from lower
savings in Europe heading towards the rest of the world. We have spoken
about a fall in external trade after all…
For the remainder, it will be necessary to choose between greater efforts by households to save, an increase in corporate profits and/or a decrease in the public deficit.
NONE OF THESE OPTIONS IS SELF-EVIDENT.
The first brings us back to the question of a reduction of household
consumption in GDP; we just saw it.
The second suggests a further distortion of wealth created in
favour of business. But that might go against current sentiment (new
‘regulation’, including the development of ESG – environment, social and
governance – criteria)…
The third seems reasonable, of course, but making the choice
between bringing current spending down and increasing tax income is no easy
task (public investment would most likely be protected).
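The reasoning behind these three options rests on the national accounting identity: domestic saving equals investment plus net exports, so extra investment not absorbed by a smaller external surplus must be financed domestically. A minimal sketch, with an illustrative split of the adjustment:

```python
def required_saving_adjustment(extra_investment_pts: float,
                               external_surplus_drop_pts: float) -> float:
    """National accounts identity, in percentage points of GDP:
        S = I + NX  =>  dS = dI + dNX
    Extra investment (dI) not offset by a lower external surplus (dNX < 0)
    must come from domestic saving: households, corporate profits or a
    smaller public deficit."""
    return extra_investment_pts - external_surplus_drop_pts

# Investment/GDP up 4.5 pts; suppose 1.5 pts are absorbed by lower net exports
# (an illustrative split) -- 3 pts of GDP of extra domestic saving remain to find.
print(required_saving_adjustment(4.5, 1.5))  # 3.0
```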
If this scenario is not entirely unacceptable but still
seems somewhat shaky, then we need to try to imagine what it would be reasonable
to expect under the two constraints of succeeding in the twin transitions and
not deluding ourselves on future economic growth.
In fact, the adjustment can only be made in two areas:
either (i) on savings
placed in the rest of the world (the counterpart of the external trade
balance), with the possibility that flows would invert and that the eurozone
would need to ‘import’ foreign savings, or (ii) on a slowdown in consumption
spending (whether household or government).
The first solution would weaken Europe on the international
stage.
In terms of macroeconomics, Europe would appear less solid,
which would reinforce the impression already given by the microeconomy (lower
profitability of companies in the Old World compared with those in the New
World and smaller presence in the sectors of tomorrow) and by politics
(unresolved issues of integration and its geopolitical role). The resulting
financial balances will be more uncertain, whether in terms of interest rates
or exchange rates; it would be impossible to think otherwise.
The second idea, which is obviously akin to frugality, seems difficult to put in place in a more Keynesian environment that is stamped by an ambition to share wealth more in favour of households. That is, of course, unless public authorities find the winning formula to incentivise households to save more.
As we can see, the ambition to drive investment, for innumerable sound reasons, has destabilising macroeconomic effects. We must anticipate them and prepare ourselves; after all, prevention is better than cure…
Infrastructure assets across North America are aging and in urgent need of better operation and maintenance (O&M) practices. The traditional challenges with O&M are accentuated by recent disruptions of the status quo, with emerging challenges including the higher degree of complexity, in addition to stricter policies regarding environmental and social performance. We provide our brief insight on each of those topics based on our experience and Project Advisory expertise.
Due diligence ahead of a transaction generally focuses on analysing the finances of the entity in question, as well as its market and strategy, in order to prepare financial forecasts and determine its value.
In addition to these essential criteria, other elements, including tangible and intangible assets, infrastructure, intellectual property and the company’s customer portfolio, must also be taken into account and evaluated, depending on the business sector.
THE DATA AT THE HEART OF THE COMPANY
In an ever more digitalised economy, information systems take on an ever more central role, particularly when it comes to infrastructure. Intellectual property and customer portfolios are systematically digitalised, insofar as the intellectual property is often software and the portfolios are often databases. This group of assets, though heterogeneous in appearance but linked through their digital nature, is therefore often designated by the generic term ‘data’.
Such data, which remains important no matter the case at hand, has now become essential, particularly in industry and software, where processes are at the heart of value. Indeed, this applies to such an extent that a transaction is now perhaps less the sale or acquisition of a company in its classic sense and more the sale or acquisition of intellectual property and/or a customer/user database.
In short, acquiring a company comes down to buying data.
THE DATA AT THE HEART OF VALUE
With these developments in mind, significant changes in terms of valuation have come to light, especially considering that the generic term ‘data’ may well include a mix of technical data (codes or processes) and personal data with particular characteristics.
This data diversity has consequences: the wealth it covers whets the appetites of a wide range of parties, including some with few scruples. Moreover, public authorities, judging (with reason) that it is their duty to protect the personal aspect of this data, have created legal and regulatory frameworks for holding and using data, non-compliance with which may lead to considerable financial penalties.
INTEGRITY AND SECURITY OF DATA, ESSENTIAL CRITERIA FOR VALUATION
If valuing data lies at the heart of valuing a company in the context of a transaction, it is essential to scrutinise the data’s security and integrity. This applies not only for obvious security reasons but also for financial reasons linked to the objective valuation of the company and the regulatory environment in which it develops.
1. The cyber stakes in a transaction
The cyber stakes in a transaction are multidimensional, touching on highly technical aspects, corporate management and even legal and insurance elements.
The approach to these different topics must focus on two critical elements: (i) the precise identification and conservation of key company data and (ii) the constant consideration of dynamics specific to a transactional context.
It is therefore not merely a question of judging the ‘thickness of the ramparts’ of the company (i.e. the robustness of its information systems), but rather of tackling the topic in its entirety as part of its past, present and future development.
The issue is therefore vast. Key data, infrastructure, the legal framework, governance and cyberspace are all points to consider in the strategic and mobile context of a transaction.
Cyber stakes in a transaction
Source: Accuracy
2. Incorporating cyber in a transaction
ON THE BUY-SIDE (BSDD)
A transaction from a buyer’s perspective is particular in that certain constraints set by the target must be accepted and the due diligence process may have to be conducted in a very short time frame. This set of circumstances is not particularly conducive to in-depth analyses before signing.
Nevertheless, the risk of acquiring an empty or compromised shell must remain at the forefront of the buyer’s mind throughout an acquisition process. The difficulty in obtaining a precise idea of the target before signing should lead to a phase of in-depth analysis between signing and integration, the aim of which would be to avoid any incidents and to be able to foresee any price revaluations and remediation, if necessary.
The following three-phase approach is critical to ensuring a productive and safe transaction.
ON THE SELL-SIDE (SSDD)
The transaction from a seller’s perspective offers greater possibility for a longer and more complete process, especially in the context of a complex carve-out from a group. During the due diligence process, adaptations or even improvements to the cyber policy can be made if necessary.
In any event, the approach should have two objectives:
• To secure the division of systems and data that will result from the transaction, in order to maintain the security and integrity of the remaining value during the process and afterwards
• To be financially assessable as part of the transaction as a guarantee of the value and integrity of the asset sold.
Again, the three-phase approach is critical.
3. For a strategic vision of threat analysis
Cyber threats must be factored into a transaction. Indeed, the parties involved in a transaction need an accurate picture of the asset at stake, and it is perfectly legitimate and necessary for them to take cyber threats into account in this process. In addition, because of the organisational changes it induces and its intrinsically strategic nature, a transaction gives rise to threats that can take several forms:
• An opportunist taking advantage of the vagaries of changes in governance and control
• An activist seeking to prevent an undesirable business combination
• An aggressive competitor looking to compromise a transaction that may be unfavourable to its interests
• A state actor spying on or blocking a transaction that it considers a threat.
These threats, which can even be cumulative (e.g. a state actor manipulating an activist to compromise a transaction for the benefit of one of its national champions, a competitor of the target), are not just the fruit of the imagination. Preventing them naturally raises the question of their attribution, or more simply of their origin.
This attribution is made possible by the analysis of technical data on cyber incidents within the company and the search for similarities with known elements (technical design and modus operandi). Done well, this attribution is very instructive. In particular, it can help to shed light on the competitive and geopolitical context of the company and to decipher the way in which its adversaries react to its strategy.
Such a detailed understanding, once acquired, naturally has a feedback impact on the company’s strategy, its design and its geo-economic implementation. It can lead to reinforcement, adaptation or reorientation with more informed arguments.
Far from being an area reserved for IT technicians and engineers and unrelated to the business challenges of a company, cybersecurity can therefore be an integral part of the company’s strategy.
Accuracy provided financial advisory services for Banque des Territoires (Caisse des Dépôts et Consignations) and RATP Capital Innovation in the context of strengthening their majority stake in the capital of the company.
Times are tumultuous and economic woes abound: high inflation, supply chain disruptions, low consumer confidence, foreign exchange issues and more. In spite of all this – or perhaps because of it – it is not so easy to draw clear-cut conclusions. Indeed, for some economic indicators, current reality and short-/medium-term forecasts appear to be at odds with each other. So where does that leave us? In this edition of the Economic Brief, we will delve into the topic, with a particular focus on current inflationary trends and interest rates.
Nowadays, it is more usual for people to type text on computers and mobile telephones than write letters by hand. Consequently, handwritten notes are no longer the norm, the age-old art of handwriting has waned and, sadly, the writing and receiving of love letters have almost died out altogether. Even celebrities have noticed the change: in a 2014 opinion piece in The Wall Street Journal, the American singer-songwriter Taylor Swift wrote, “I haven’t been asked for an autograph since the invention of the iPhone with a front-facing camera.”
The digital age has undoubtedly enhanced our way of life in numerous ways, but it has also had negative and unpleasant consequences. Even though we still communicate through written text regularly, the likelihood of putting pen to paper is at a record low due to the use of digital communications in the workplace and at home. In addition, the use of formal salutations and signatures in emails has been dropped, e.g. the greeting ‘Hi’ has replaced the word ‘Dear’, and sign-offs have changed from the formal ‘Yours faithfully/sincerely’ to the informal ‘Kind regards’ or ‘Best wishes’. Contactless payments have become more popular recently, especially with the advent of the pandemic; most debit and credit card purchases no longer require a physical signature. Microsoft Teams and Zoom meetings have replaced face-to-face meetings, and ‘signing’ for a delivery on the dotted line is now virtually non-existent.
This change in the way we communicate has raised an issue for Forensic Document Examiners (FDEs). They authenticate handwritten documents, notes and signatures to determine whether they are genuine or fake, whether alterations have been made and whether a signature is a later addition. They also look at more extended forms of writing, such as letters, to determine whether a particular individual wrote a specific piece of handwriting. However, examiners require samples for comparison, so they collect writing samples known to be from the individual in question. Therefore, as we write less by hand, other analytical methodologies must come into play to detect forgeries.
Casework
A wealth of crucial evidence can come from examining documents involved in investigations. A kidnapper’s ransom note (a demand for a sum of money for the release of a captive) can have unseen impressions that point to where the victim is being held. An affluent individual’s final will and testament may have been altered so that a family member receives a large sum of money. Similarly, a white-collar crime investigation may include the forensic examination of altered financial documents.
Most FDEs have examined signatures on countless cheques, wills, deeds and financial documents. They have scrutinised medical records to see whether a doctor’s signature was added later than initially indicated, perhaps after a claim was submitted. They also examine longer forms of writing, such as menacing or harassing letters and suicide notes; for example, if the apparent suicide victim did not write the message, the police might have a murder on their hands.
Handwritten forgeries
There are two types of handwriting samples: requested writing samples (formal) and collected writing samples (informal). Requested writing samples are gathered from the author under controlled and monitored conditions.
Collected (informal) samples originate from the author before the incident in question, usually in the ordinary course of their daily activities.
The selected samples need to come from approximately the same time period as the documents under investigation, since handwriting changes over time. Having multiple recognised examples for comparison is therefore crucial; it enables the examiner to account for the variability in an individual’s writing style and to ascertain whether a particular piece of handwriting can be attributed to a given individual. It is worth noting that handwriting examination is distinct from graphology, also known as “handwriting analysis”, which endeavours to discover character traits from handwriting.
Documents
Finding a forged document, in part or in whole, means that the document examiner may need to investigate further: the type of printed material present, the paper, any simulated security features, the handwriting and the signatures. Indented impression evidence may also link separate pages to one another.
When examining documents printed by electronic devices, one starting point is determining the type of printing technology involved and then establishing whether the questioned document is an original or a reproduction. It is also possible that multiple printing methods could have been involved in the creation, for example, a printed document that was subsequently photocopied. Once these facts are established, the examiner can examine the style and appearance, formatting and copy distortion.
The documents subject to the examination may contain markings from rubber stamps, embossed seals, watermarks, or other physically printed marks. Stamp analysis starts with the location of the ink source, which could be one of several, for example, the self-inking stamp or the handstamp. The examiner must first confirm that the stamp caused the markings and was not computer-generated. Next, the examiner can turn to the details of the markings, in particular any defects that may be distinctive to an individual stamp. These may be manufacturing defects, such as distortion or misalignment, or a result of use, such as accumulated ink or dirt, or misuse.
Before making supplementary impressions, it is vital to photograph the stamp to preserve its original nature and the chain of custody.1 The additional impressions need to be created on individual sheets that resemble the original document(s). Care must be taken to ensure that the new impressions vary in terms of pressure and angle of application. The examiner can then make side-by-side comparisons with the original and/or suspect documents. This part of the analysis will also cover the positioning of signatures and stamps on a document, especially where “cut and paste” is alleged to have been used to create the forgery.
Detailed examination of computer printouts potentially allows examiners to draw conclusions on the type of printer employed. It is conceivable, for example, that a specific unique defect in the printed document links the printout to an exact device. Identifying patterns can also provide a digital fingerprint pointing to a particular printer model and device, known as the Machine Identification Code (MIC): a digital watermark of tracking dots that certain colour laser printers and copiers leave on every printed page. Security experts and journalists have put forward a case in which tracking dots led to Reality Winner being charged with removing classified material and mailing it to a news outlet in 2017.2 However, in contrast with typewriters, it is seldom possible to trace a questioned document back to the device that created it unless computer forensic analysis is employed. Such analysis also comes into play when a document is attached to an email (i.e. the suspect document is digital).
Another aspect is the work of the Questioned Document Examiner (QDE), whose job is to analyse documents of interest. The initial examination of questioned paper documents involves testing the colour, thickness, weight and weave pattern3 to draw conclusions on the source of the paper. This work can also include documents that contain indented impressions, which are not visible to the naked eye; these can be recovered using an electrostatic detection apparatus4 (ESDA). Indented impressions occur when an imprint is left on the page(s) beneath the one written upon. These impressions are used in investigations to connect evidence between pages or to an incident, such as matching a ransom note to a notepad found in a suspected kidnapper’s office.
1 Chain of custody, in legal contexts, is the chronological documentation or paper trail that records the sequence of custody, control, transfer, analysis, and disposition of materials, including physical or electronic evidence.
3 Weaving consists of arranging lengthwise threads (called the warp) side by side. Then crosswise threads (called the weft) are woven back and forth in one of many different patterns. Weave patterns vary and can be used as forensic evidence.
4 ESDA works by applying an electrostatic charge to a document containing suspected indented writing. Indented writing (i.e., disturbed fibres) created from previously written documents on overlying pages can then be seen. In some cases, this method can be applied to develop fingerprints on documents.
Accuracy, an international advisory firm that brings its expertise to business leaders and decision-makers, and Causality Link, a financial information technology provider, today announced they have signed a partnership agreement to enrich certain areas of Accuracy’s strategic advisory services with the information produced by Causality Link’s platform.
Accuracy is proud to announce that we are now a member of Campus Cyber, the nexus of cybersecurity in France, initiated by the president of the French Republic. With more than 160 members, Campus Cyber’s mission is to bring together digital security players to protect society and promote French expertise in the field.
Campus Cyber’s goal is to help our society grow towards digital security. It aims to train public and private players, give a better understanding of what cybersecurity entails and create synergies between stakeholders so as to steer them towards better technological innovation. Organising events in this digital domain is also one of the services offered by its teams.
The Spanish banking sector continues to improve its results but faces uncertainties regarding its resilience.
Macro scenario
Having overcome the shock of Covid – at least from an economic perspective – the Euro area anticipated a scenario of accelerated and prolonged growth. Improvements in the supply chain, substantial household savings and a good tourism season supported this thesis. However, the supply shock in key raw materials due to the Ukraine War has pushed the CPI up to levels not foreseen last year (10.8% in Spain in July). The projected baseline scenario suggests CPI rates above 4% in the Euro area until the end of 2023, a negative impact on household consumption, and on GDP, which will see less growth than had been expected in the coming quarters. […]
On 5 September, the newspaper Expansión reported on the impact of the new special tax on the main banking entities, according to Accuracy’s new study of the Spanish financial sector.
Below, we can see the chart, in millions of euros, for three scenarios (base, growth and deceleration), as well as other aspects of the firm’s comprehensive study.
1. Digital transformation in the retail banking industry
Over the past 10 years, retail banking has experienced a new wave of digital transformation. We have seen the rise of FinTechs and TechFins, which have brought integrated and faster financial services to their customers, whether banked or unbanked. With more advanced and agile technology solutions, and potentially less restrictive regulations, these challengers were able to boom in both developed and developing markets. Nowadays, retail customers are increasingly asking for one-stop shops and seamless customer experiences after having experienced the super-apps. With the global pandemic over the past two years, this trend has only accelerated.
Against this backdrop, traditional retail banks are not left with much choice but to undertake necessary digital transformations to meet their customers’ expectations. Many have done so in the past few years, and the rest are certainly at least planning to do so in the near future.
Today, the industry is evolving rapidly. Open banking and APIs are driving the development of new ecosystems for financial services, in which multiple players, traditional and non-traditional alike, are competing against each other.
In this article, we will discuss the key trends in the retail banking industry, followed by an analysis of the future of traditional retail banks and digital banks.
2. Trends in retail banking transformation
Many retail banking transformations are happening in the market. We can broadly categorise them into three types: (1) data enablement, advanced analytics and data-driven business decision-making, (2) customer experience 2.0 and (3) automation of end-to-end services (i.e. adoption of technology for on-boarding, e-KYC, risk management, internal controls, etc.).
Figure 1: Key trends in retail banking transformation
Source: Accuracy analysis
Trend 1: Data enablement, advanced analytics and data-driven business decision-making
Decision-making processes in retail banking are making increasing use of big data as well as AI & machine learning. This trend has disrupted almost all aspects of banking, from how banks on-board customers to how they empower them. Meanwhile, this trend also serves as the cornerstone of other transformation and innovation trends that are reshaping the retail banking landscape.
It is clear that technological advancement has been the driving force behind the evolution of retail banking. The advent of modern computers significantly accelerated banking processes and enabled computations that had previously been impossible. The use of the internet, which facilitates the communication of information and flattens the financial world, represents another huge technological leap for the retail banking industry. And most recently, over the last decade, smartphone penetration has risen significantly, resulting in the proliferation of mobile banking. In the future, we believe that big data and AI & machine learning will drive new waves of transformation.
Big data use case – customer segmentation. One example is micro-segmentation. Retail banking has long been a data-driven business, where data is generated at every stage of the customer journey; in the past, however, most banks did not have an efficient process or the necessary IT infrastructure to realise the data’s potential, and traditional data use was just the tip of the iceberg. Huge amounts of alternative data, whether structured or unstructured, are generated every second from various internal and external sources. The development of big data analytics and the increasing awareness and accessibility of alternative data have gradually enabled banks to make use of more valuable data in a cost-effective way, and the value of that data can be further ‘mined’ when combined with AI and machine learning techniques. With this increased use of data and advanced AI capabilities, retail banks can generate dynamic and granular client segments. Granular segmentation of customers can help the development and marketing of hyper-personalised products and services, as well as the optimisation of product pricing.
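The micro-segmentation idea can be illustrated with a minimal clustering sketch. This is not any bank’s actual methodology: the two behavioural features, the toy customer data and the plain k-means routine below are all hypothetical, chosen purely to show how granular segments can fall out of usage data.

```python
import random

def kmeans(points, k, iterations=50, seed=42):
    """Cluster 2-D points into k segments; returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = list(rng.sample(points, k))
    labels = [0] * len(points)
    for _ in range(iterations):
        # Assignment step: attach each customer to the nearest centroid.
        for i, (x, y) in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: (x - centroids[c][0]) ** 2 + (y - centroids[c][1]) ** 2,
            )
        # Update step: move each centroid to the mean of its segment.
        for c in range(k):
            members = [p for p, label in zip(points, labels) if label == c]
            if members:
                centroids[c] = (
                    sum(p[0] for p in members) / len(members),
                    sum(p[1] for p in members) / len(members),
                )
    return centroids, labels

# Toy customer base: (monthly card spend in EUR, share of digital transactions).
customers = [(120, 0.90), (150, 0.80), (130, 0.95),   # digital-first, low spend
             (900, 0.20), (1100, 0.10), (950, 0.15)]  # branch-heavy, high spend
centroids, labels = kmeans(customers, k=2)
```

In practice a bank would use far richer feature sets and industrial tooling, but the principle is the same: segments emerge from behaviour rather than being imposed by product lines.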
Alternative data use case – tapping into underserved segments. The use of alternative data not only improves customer segmentation but also enables banks to assess the creditworthiness of untapped customer segments. This helps to extend financial services to the two billion unbanked adults globally. Another opportunity lies in the SME market. Currently, many SMEs are underserved by traditional banks or are often subject to unsatisfactory credit terms. As a result, many FinTech and TechFin firms embrace the opportunity to provide alternative financing to SMEs. Kabbage, which was acquired by American Express in 2020, uses an automated lending platform to provide finance to small businesses and consumers. The FinTech company uses alternative data, such as business volume, transaction volume, time in business, statistics from e-commerce platforms or even social media activity to approve the funding. Additional benefits provided by digital SME cash flow solutions also include simplicity, speed, and transparent and real-time monitoring for small businesses.
Machine learning use case – credit decision-making. Another trend is leveraging machine-learning models to optimise business decision-making. The best way to imagine machine learning is to think of it as a replacement for traditional rule-based models. One real-world example of a rule-based model is a credit scorecard. Traditionally, a scorecard is developed by picking risk factors with high predictive power from a full list of human-determined risk factor candidates. The scorecard is a static, rule-based model for banks to make credit decisions. However, these mechanisms are not designed to capture complicated relationships and can become outdated if not enhanced frequently.
In contrast, a machine-learning model requires much more data for training, and its algorithms can capture more complicated, non-trivial relationships in the data. These are the underlying reasons why ML models can outperform traditional ones. By way of illustration, several FinTech start-ups in retail lending, such as Affirm and Upstart in the US, have developed ML models that, according to their analysis, can outperform their FICO counterparts by a wide margin (approving more customers with lower losses or incurring fewer losses with the same number of customers).
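The static, rule-based scorecard described above can be made concrete with a short sketch. The risk factors, score bands, point values and cutoff below are invented purely for illustration, not taken from any real lender.

```python
# Hypothetical points-based credit scorecard: each risk factor maps an
# applicant attribute to a fixed number of points via static bands; the
# total score drives the credit decision.

SCORECARD = {
    "age": [
        (lambda a: a < 25, 10),
        (lambda a: 25 <= a < 40, 25),
        (lambda a: a >= 40, 35),
    ],
    "annual_income": [
        (lambda i: i < 30_000, 5),
        (lambda i: 30_000 <= i < 60_000, 20),
        (lambda i: i >= 60_000, 30),
    ],
    "years_at_address": [
        (lambda y: y < 2, 5),
        (lambda y: y >= 2, 15),
    ],
}

def score(applicant):
    """Sum the points of the first matching band for each risk factor."""
    total = 0
    for factor, bands in SCORECARD.items():
        for condition, points in bands:
            if condition(applicant[factor]):
                total += points
                break
    return total

def decide(applicant, cutoff=60):
    """Static decision rule: approve if the score reaches the cutoff."""
    return "approve" if score(applicant) >= cutoff else "decline"

applicant = {"age": 42, "annual_income": 70_000, "years_at_address": 5}
decision = decide(applicant)  # 35 + 30 + 15 = 80 points, above the cutoff
```

The bands and cutoff are fixed at development time, which is exactly why such models struggle to capture complex interactions and drift out of date; an ML model would instead learn the mapping from historical outcome data.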
Figure 2: Affirm ML modelling example
Note: For a given level of risk, our proprietary model is capable of accepting significantly more applications when compared to FICO’s scoring methods through a superior ability to price risk. Alternatively, for any given consumer sub-segment, our model produced lower risk outcomes than FICO scoring. Source: Affirm Investor Presentation (March 2022)
Other use cases. A number of other areas exist where banks can make use of AI and machine learning. In retail banking, advertising optimisation, customer segmentation, fraud detection, advanced risk analytics and data-driven decision-making, to name but a few, naturally come to mind as the transformations are already happening. Considering the wider picture, machine learning can also help asset pricing, market making, recruitment, procurement and more. In total, as long as banks have quality data to work with, use cases for ML models can be very broad.
Banks are increasingly devoting more resources to developing analytics toolkits and improving their AI and machine learning capabilities. These help boost revenue, lower costs and improve overall efficiency. In addition to big data and AI & machine learning, other sub-trends in powering new technologies, such as shifting to the cloud and the expanding use of APIs to unleash digital potential, are unfolding.
Trend 2: Customer experience 2.0
In an increasingly challenging, dynamic and uncertain business environment for retail banks, where regulations are continually changing, interest rates are consistently low, competition is fierce and consumer behaviours and expectations are shifting, bespoke customer services to deepen customer relationships have become essential.
Moving from a product-centric model to a customer-centric model. A shift to a more customer-centric strategy is critical for retail banks to thrive. One key reason for adopting a customer-centric approach is the growing number of millennial and Generation Z customers, who, compared with older generations, have different expectations. For example, they want to have total control over banking information, a seamless and digital banking experience, and more diverse and innovative financial and banking product offers. Because of growing and shifting expectations, retail banks have been forced to pivot strategically, such as through the creation of new banking channels, product innovation and even the implementation of new organisation charts (e.g. creating new positions such as head of digital banking and head of customer lifecycle management).
Another key aspect here is to organise data around customers instead of products. In the past, banks usually stored data according to product types (i.e. credit cards, loans, etc.); however, with the help of advanced data analytics, banks now focus more on building a holistic view of their customers by leveraging both traditional and alternative data.
In delivering a superior customer-centric approach, banks should develop and monitor their client-centric key performance indicators (“KPIs”), which would help streamline customer experience metrics and accurately identify areas for improvement. For instance, client-centric KPIs about retail banking products can cover monthly active users per product, app use duration per product, time taken to obtain funding, etc. These metrics focus more on the customer’s use and engagement experience, instead of the traditional product-oriented focus (e.g. number of loans per year).
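Client-centric KPIs like these can be derived directly from raw usage events. The sketch below assumes a hypothetical event log of (customer, product, session minutes) tuples; the data and metric definitions are illustrative only.

```python
from collections import defaultdict

# Hypothetical app-usage events: (customer_id, product, session_minutes).
events = [
    ("c1", "loans", 5), ("c2", "loans", 8), ("c1", "cards", 3),
    ("c3", "cards", 6), ("c3", "cards", 4), ("c2", "savings", 2),
]

def monthly_active_users(events):
    """Distinct customers seen per product over the period."""
    users = defaultdict(set)
    for customer, product, _ in events:
        users[product].add(customer)
    return {product: len(customers) for product, customers in users.items()}

def avg_session_minutes(events):
    """Average app-use duration per product."""
    totals, counts = defaultdict(int), defaultdict(int)
    for _, product, minutes in events:
        totals[product] += minutes
        counts[product] += 1
    return {product: totals[product] / counts[product] for product in totals}

mau = monthly_active_users(events)       # e.g. two distinct users of 'loans'
durations = avg_session_minutes(events)  # e.g. average loans session of 6.5 min
```

The point is the orientation of the metrics: both functions aggregate around customer behaviour per product, not around the number of products sold.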
Another important aspect of a customer-centric approach is advisory service. As financial services become digital, it is vital for banks to maintain the human element and a constant connection with their customers. Banks need to ensure quick or even real-time interactions, which could be enabled by a combination of robotic advisers and human advisers. Furthermore, a new trend is coming to light in which clients demand more individualised budgeting, spending, and even investing advice, which can be seen as a critical field for banks to differentiate themselves. For example, Singapore’s OCBC bank has launched its robotic investment service, RoboInvest, to offer its customers a simple and digital investment advisory experience.
Enhanced super-app experience. With rapid transformations already under way, banks need to move beyond the surface of digitalisation. A growing trend is to enrich the service offerings provided in mobile apps. The ultimate goal is to develop a one-stop shop, like a “super app” platform, to cater to customers’ holistic financial needs and other needs in their daily lives. One example is the WeChat app from China. It has grown from a simple messaging app to a super app with a plethora of services ranging from digital wallet services, money transfers, payments, finance and investing, taxi and transportation services, hotel and tour booking services, ecommerce, social life and even gaming.
Whilst it is not necessary for all banks to create a super app, it is clear that the concept of moving money from one’s bank account to other places seamlessly is gaining traction. As a result, core features such as banking, financing and investment, and payments should be integrated into the retail banking experience. For example, customers may wish to invest in funds, stocks or even crypto/NFTs within their banking apps. They also need easy access to loans within the apps. Payment services, to which traditional retail banks paid less attention, are also essential for a seamless digital experience. Several disruptions in the payment field can be observed. For example, with the need for contactless payments amid the pandemic, the industry has seen increasing adoption of digital payment methods (i.e. Apple Pay and Google Pay). The ecommerce trend also serves as a strong tailwind for digital payment. In addition, in terms of credit cards, there is a growing trend of buy now pay later (BNPL) solutions. Overall, with banking, financing and investment, and payment experiences going digital, the industry has seen a convergence of these services in one super app. One well-known example of a super finance app is the Cash App ecosystem developed by Block, Inc. (formerly known as Square). Starting from supporting seller activities, the FinTech giant has expanded to a consumer ecosystem and launched Cash App. With an impeccable digital experience provided within one single app, Cash App has become one of the leaders in consumer financial apps in the industry.
There are multiple benefits to offering an enhanced digital experience and even a super app. First, a good digital experience can increase customer loyalty and reduce churn rate. Second, it acts as a marketing tool to attract new customers and create a network effect among friends and colleagues. Third, the more a customer’s financial journey occurs within the app, the more data is collected to generate valuable business insights. Fourth, banks can cross-sell new products or expand to new markets, further monetising the customer base. Finally, the partnership with companies in the broader ecosystem (e.g. loyalty reward schemes with partners) can also unlock enormous business potential.
Trend 3: Automation of end-to-end services and the use of blockchain technology
The third major trend in the retail banking industry is automation. The ultimate goal for automation is to reduce decision time, save costs, improve productivity and efficiency for banks and provide a faster banking experience for customers. In the following figure, we can see the most common areas with potential for automation and their respective cost/volume considerations at the banks’ risk and finance & operations functions.
Figure 4: Key processes for automation in the retail banking industry
Source: Accuracy
Banks should ask themselves which procedures, whether customer-facing or internal, can be automated across the customer lifecycle. Many banks are currently aiming to automate as many manual procedures as possible, such as loan application approval, funding, fraud detection and document verification. The advantages of automation can be numerous. It allows for significant cost saving, increased volumes, faster decision-making, improved customer services and reduced risk of human error. However, one factor worth bearing in mind is that, despite the automation, the decision engines of retail banks should retain some degree of flexibility in order to navigate the dynamic environment successfully and deal with unexpected bespoke demand.
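The balance between automation and flexibility can be sketched as a decision engine with an explicit manual-review route: clear-cut applications are decided instantly by rules, while borderline or unusual ones are escalated to a human underwriter. The field names, rules and thresholds below are illustrative assumptions, not any bank’s actual decision logic.

```python
# Hypothetical automated loan decision engine with a manual-review
# escape hatch, preserving flexibility for cases the rules do not fit.

def automated_decision(application):
    """Return 'approve', 'decline' or 'manual_review' for an application."""
    # Hard declines: suspected fraud or failed document verification.
    if application["fraud_flag"] or not application["documents_verified"]:
        return "decline"
    # Straight-through approvals: low debt-to-income ratio, small exposure.
    dti = application["monthly_debt"] / application["monthly_income"]
    if dti < 0.30 and application["amount"] <= 10_000:
        return "approve"
    # Everything else: escalate to a human analyst rather than forcing
    # the case through rules that may not suit it.
    return "manual_review"

clear_case = {"fraud_flag": False, "documents_verified": True,
              "monthly_income": 5_000, "monthly_debt": 1_000, "amount": 8_000}
edge_case = dict(clear_case, amount=50_000)
```

Here the clear case (debt-to-income of 0.2, small amount) is approved automatically, while the larger loan is routed to manual review: the automated path handles volume, and the human path handles the bespoke demand.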
Blockchain or distributed ledger technology (DLT) is another trend that is revolutionising the banking industry by streamlining end-to-end business workflows. This technology can provide automation, trust and security for banks and their customers. The most common use cases include trade finance, cross-border transactions, insurance, payment and settlement, and asset tokenisation, among others. For example, Contour is a decentralised platform developed by a consortium of eight banks (Bangkok Bank, BNP Paribas, Citi, CTBC Holding, HSBC, ING, SEB and Standard Chartered) and three delivery partners (Bain Consulting, CryptoBLK and R3). It is a global digital trade finance network that uses blockchain technology to provide end-to-end trade-related services connecting businesses, lenders and partners seamlessly and securely in real time. Although blockchain technology is currently used mostly in corporate banking, it is expected that the use case will gradually spread to retail banking.
3. Retail banks in the future
The majority of banking activities now take place online; more and more people are familiar with internet banking and mobile banking. However, in the post-pandemic era, the reality is that having a mobile app that simply moves the traditional banking experience to a digital format or having some degree of automation is not enough. In the future, banks must rethink their entire business model to take their customers’ experience to the next level. With all of the transformations taking place, retail banking will never be the same. Traditional retail banks and virtual banks must identify and capitalise on their respective strengths while attempting to mitigate their weaknesses.
Future of traditional retail banks
We believe that banks should view innovation and digital transformation as a growth engine instead of simply reacting to the disruption brought by FinTechs and TechFins. Traditional players have advantages in terms of bigger customer bases and the amount of resources they can invest in technology infrastructure. They also have a broader range of products and solutions for their customers, and stronger brand names to gain customers’ trust. When compared with younger and less experienced digital banks, these make a significant difference.
Figure 5: Key benefits of traditional banks
Source: Accuracy
Traditional banks have their weaknesses too. The main concern is the customer experience. In today’s fast-paced environment, customers often find traditional banks to be slow. There always appears to be a long queue at the branch. Applications are always time-consuming and the paperwork seems endless. Despite some degree of digitisation, when it comes to speed, traditional banks lag behind their digital counterparts. When compared with services offered by technological players, the gaps are even wider.
Another concern is resistance to change. Just as traditional carmakers are hesitant to commit fully to electric vehicles for fear of cannibalising the pipeline of their current vehicles, traditional banks are reluctant to move their distribution channels online, which may compromise their current success.
High running costs are another disadvantage. Traditional banks incur large costs associated with physical premises (such as branches and ATMs), which represent a significant burden when clients no longer visit these facilities. The army of staff becomes costly as well. In addition, the lack of agility of legacy systems also hampers business transformation. Often, there is limited room for manoeuvre to reduce staff or implement massive cost reduction programmes. Regulatory and political surveillance also limits the options for such actions.
In the future, traditional players need to deploy strategies that maximise their strengths and minimise their weaknesses; speed to adopt transformations will become key.
Greater concentration and restructuring are also likely in the future of retail banking. Concentration aims to sell all businesses that are far from the banks’ main markets in order to be as large as possible domestically (e.g. sale of Bank of the West by BNPP, sale of French retail banking by HSBC). Restructuring, particularly in Spain but also in France (e.g. the merger between SG and CDN), aims to lower a bank’s breakeven point. Further, a trend whereby traditional retail banks acquire or partner with emerging FinTech players has come to light, in order to build specialised businesses rapidly. For example, J.P. Morgan acquired OpenInvest, a leading FinTech start-up that assists professionals in providing tailored value-based investment solutions, benefitting the bank’s Private Bank and Wealth Management advisory service offerings. Another is the leading AI lending FinTech Upstart, which partners with several US regional banks to help them grow their customer lending portfolios with seamless digital experiences.
Meanwhile, traditional players will have to rethink their distribution models in light of digitisation and convergence of services. Such convergence leads to embedded finance, which integrates financial products directly into the customer’s purchase journey.
Future of digital banks
Digital banks, or virtual banks, are FinTech companies that provide mobile and online banking solutions via apps, software and other technologies. They are digitally based, with no physical facilities. They typically specialise in certain financial products and services, catering to specific customer segments. For a digital bank, there are several key steps to consider for continual business success.
Figure 6: Key steps in a digital bank’s business development
Source: Accuracy
Digital banks have several advantages over traditional players. The first stems from their technological foundation. They are digital natives who build the required IT infrastructure from the ground up and are able to provide the right level of customer experience with more flexibility (e.g. 24/7 services). In addition, smaller digital banks typically specialise in certain segments and can provide a more tailor-made customer experience, which is difficult for large generalist banks. They can also add some features such as budgeting and personal financial planning to enhance the customer experience. In terms of operating costs, their smaller size allows them to be more agile than their traditional counterparts, both in terms of labour costs and fixed costs, as they have no physical branches. The nature of being virtual also makes this new type of bank more eco-friendly.
Figure 7: Key advantages of digital banks
Source: Accuracy
Their disadvantages frequently stand in direct opposition to their strengths. The limited service offering allows digital banks to focus on and better meet specific needs; however, in a world where technology allows for the emergence of multi-service platforms and customers expect increasingly simple answers and one-stop shops, addressing only part of the financial services needs is not necessarily the right direction in the long term. One of the major consequences of this limited supply is the inability to capture the most profitable customers of retail banking: the premier banking accounts. This explains their low income, which peaks at EUR 20 per customer against EUR 300 to 400 for traditional banks in Europe.
But this does not necessarily rule out a bright future for digital banks. As they shift their focus from growth to profitability, they need to act quickly in terms of rolling out new products and establishing partnerships.
One thing of note is that the digital banking landscapes in developed and developing countries differ significantly. The road is often wide open for them in underbanked countries. For example, Nubank, a Brazilian digital bank, has around 54 million customers in Brazil, Colombia and Mexico and has a market capitalisation of USD 37 billion.
The situation is different in ultra-banked countries like France. Digital banks there are frequently forced to make difficult choices due to low profitability and a trend towards convergence in financial needs. They can quickly broaden their range of products to meet more customers’ needs or partner with merchants to offer unique products. They can also explore selling their business to traditional players or even technology giants who are eager to tap into the field.
Regulatory support
Increasing regulatory support for both traditional and digital banks is a crucial driver of banking transformation. For example, the United Kingdom has launched the Open Banking standard in order to improve customers’ banking experiences and to encourage competition and innovation in the industry. The standard covers five core components: API specifications, security profiles, customer experience guidelines, operational guidelines and a reference library. Since its inception in 2017, over 230 third-party service providers and more than 90 payment service providers have joined the Open Banking ecosystem.
The concept of a FinTech sandbox is also critical in the industry to facilitate FinTech innovations. It allows industry players to experiment with creative ideas, collect real-life data and perform pilot testing in a safe environment. This also helps speed up product launches (without full regulatory compliance), reduce development costs and contain the consequences of failures. The Hong Kong Monetary Authority (HKMA) introduced the Fintech Supervisory Sandbox (FSS) in 2016, which was upgraded to FSS 3.0 in November 2021. The Monetary Authority of Singapore (MAS) has also established the FinTech Regulatory Sandbox framework since 2016.
Final remarks
The battle will only become more fierce in the next 10 years. Apart from traditional players and digital players, there are also formidable contenders joining the game: technology companies (e.g. Apple launched Apple Pay and Apple Card), retailers (e.g. Walmart acquired several FinTech start-ups to develop an all-in-one app for consumers to manage their money) and even blockchain technology (e.g. Bitcoin surpassed PayPal in transaction volume). Looking ahead, we believe that in a world filled with both opportunities and challenges, both traditional retail banks and digital banks will need to accelerate their transformation. They will have to constantly reshape themselves and rethink their business models to suit the needs of their customers, staff and society as a whole.
For our fifth edition of the Accuracy Talks Straight, Jean-François Partiot presents the editorial, before letting Romain Proglio introduce us to Rnest, a piece of software that helps to resolve problems using internet data. We then analyse the residential property prices in Paris with Nicolas Paillot de Montabert and Justine Schmit. Sophie Chassat, Philosopher and partner at Wemean, explores how to learn simplicity. And finally, we look closer at the dynamics of corporate credit spreads with Philippe Raimbourg, Director of the Ecole de Management de la Sorbonne and Affiliate professor at ESCP Business School, as well as the liberal revolution with Hervé Goulletquer, our senior economic adviser.
For this editorial in summer 2022, I would have liked nothing more than to wish you a lovely, light, magical summer.
Unfortunately, war has taken hold at Europe’s door, prices are exploding and the planet is suffocating.
My pen must be wiser.
Summer is a time to step back and reflect; let us take advantage of it to relearn…
– to relearn simplicity to find the taste for simple and efficient action again. The complexity dogma paralyses us; let us unburden ourselves of it! (The Cultural Corner with Sophie Chassat)
– to relearn to live together around the concept of the common good and aim for reasonable economic development in the long term. (Economic point of view with Hervé Goulletquer)
– to relearn to invest for the long term with limited resources, whether:
In technologies of the future:
• In Start-up Stories with Romain Proglio, you will discover Rnest, software that helps to resolve problems using the Web – how to dive into the depth and complexity of the Web to surface again with simple and understandable answers!
Via major corporate groups:
• In a context of the sudden rise in interest rates and tightening macroeconomic conditions, we must relearn the link between financing conditions and the financial structure of a group. The equity and debt markets are closely linked and their developments are interdependent, meaning that managers must carefully assess the cross-impacts of changes in their funding. (The Academic Insight with Philippe Raimbourg).
In real estate: • Real estate is an asset class considered highly safe and predictable, particularly in population and activity pools as rich and dense as Greater Paris. But to what extent is this still true post-2020 after the unfolding of the public health crisis and the wave of capital injected into the economy at negative real interest rates? (Industry Insight with Justine Schmit and Nicolas Paillot de Montabert).
This summer, let us take inspiration from Erasmus and his humane wisdom.
‘It is the greatest of folly to wish to be wise in a world of madmen.’
‘The whole world is home to us all.’
So, let us be wise; let us be mad; but let us be respectful of one another and to the generations to come!
Rnest is a piece of software that helps to resolve problems using internet data. Philippe Charlot, its founder, started with a simple fact: 90% of information useful for decision-making is available on the internet. Yet finding the right information is particularly complicated: sources are innumerable, traditional searches return only a limited number of results and the time necessary to read, understand and summarise the web pages presents a significant obstacle. When faced with an issue, a user will typically ask a question on a major search engine (Google, Bing, Qwant or Yahoo to name only the largest of them) or, for the more enlightened, on monitoring software (Quid, Palantir, Digimind or Amplyfi). These search engines and software tools search for keywords in web pages, across predefined sources, and with very low productivity.
However, thanks to Rnest, the user can make a request for which the software will undertake a precise exploration of the internet in a URL, in hypertext or even in nearby text and will proceed to validate the pages visited precisely at phrase level (not just page level). This gives a result that is much more precise and relevant. Rnest is capable of exploring almost 250,000 web pages in a few hours. The software is also capable of proposing a problematised summary note in response to the initial request.
‘Find out what nobody knows yet, just with what you know’, Rnest promises. After formulating his or her question, no matter how complicated, the user initiates the search, and Rnest explores the web in real time to extract the most relevant content.
This artificial intelligence, made in France, navigates the web fully independently, is inspired by human behaviour and fulfils extremely varied needs across all sectors of activity. For example, based on the question ‘what are the innovation strategies of the top 200 French companies?’, Rnest will visit one million web pages, effectively saving the equivalent of 833 days of reading.
Among its first clients are BNP Paribas, Bouygues Télécom, Total, EDF and now Accuracy. Indeed, Accuracy will benefit from Rnest’s power in Open Source to enrich its advisory services to its clients.
Is real estate at the peak of its cycle? The Paris example.
What should we think of the stability in Parisian residential property prices, despite the public health crisis?
Since March 2020, the Covid-19 pandemic has shaken up the global economy, bringing about changes in numerous sectors, including residential property, in particular.
This public health crisis is behind a change in economic paradigms that have been in place since the 2010s. The eurozone is currently facing a significant increase in inflation, which reached 5.2% in May 2022, a level unseen since 1985. Access to mortgages for individuals is gradually becoming more difficult.
However, despite this context and in contrast to previous crises, the price per square metre of existing housing in Paris has not experienced any significant decrease; in fact, it has remained relatively stable.
In this situation, two opposing theories have come to light: on the one hand, some consider that the consistent rise of existing housing prices in Paris is justified by its unique nature, the City of Light, which shelters it from economic cycles; on the other hand, some worry about a property bubble in the capital that is on the verge of bursting.
WHAT DO THE FIGURES SAY?
According to the Parisian notary database, the price per square metre of existing housing rose from €3,463 to €10,760 between 1991 and April 2022, an increase of around 3.6% per annum on average. Inflation over the same period stood at around 1.8% per annum on average, according to Insee. In short, the value of a square metre in Paris grew on average twice as quickly as inflation. In the graph below, we can observe the curve representing the actual increase in property prices per square metre in Paris versus a curve representing the 1991 price per square metre subsequently inflated each year using the Insee inflation rate.
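The growth rate quoted above can be checked with a quick compound-growth calculation. The sketch below uses only the figures from the text; the span of roughly 31 years from 1991 to early 2022 is an approximation, and the exact result depends on the year count assumed:

```python
# CAGR of Paris residential prices per square metre,
# using the figures quoted in the text (EUR 3,463 in 1991 to EUR 10,760 in April 2022).
price_1991 = 3463
price_2022 = 10760
years = 31  # 1991 to early 2022, approximated

cagr = (price_2022 / price_1991) ** (1 / years) - 1
print(f"Average annual growth: {cagr:.1%}")
```

Depending on the exact span assumed, this gives roughly 3.6–3.7% per annum, consistent with the article's figure, and about twice the 1.8% average inflation over the period.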
This graph highlights two observable phases:
• In the period from 1991 to 2004, the actual price per square metre remained below the 1991 Insee-inflated price. Property prices grew significantly in the period prior to 1991 then underwent a major correction of around 35% between 1991 and 1997. Only in 2004 did the actual Parisian property price curve exceed the Insee-inflated curve.
As a reminder, 1991 marked a high point in the cycle, completing an upward phase of speculation by property dealers in Paris, and the beginning of what certain experts would call ‘the property crisis of the century’.
• In the period from 2004 to 2022, the actual price per square metre grew massively, much more quickly than economic inflation: +5.0% per annum on average for the actual price versus only +1.8% for inflation. There is therefore major disparity between the development of residential property prices in Paris and the average increase in standard of living.
Moreover, it is worth noting that between 2020 and 2022, the price per square metre in Paris did not experience any major fluctuation, in contrast to previous crises (1991 or 2008).
The first quarter of 2022, however, sees the return of significant inflation, with no repercussions on the actual property prices at this stage.
Is this due to increasing demand?
Many defend the following theory: demand for Paris is growing, whilst supply is limited, which has resulted in the constant rise in prices per square metre, no matter what stage of the economic cycle is prevailing.
The demographic reality is not quite so categorical. In 1990, Paris had a population of 2.15 million; this grew to 2.19 million in 2020, with a peak of 2.24 million in 2010. Further, since 2021, the city’s population has been falling, reaching 2.14 million in 2022. Indeed, some Parisians, finding the health restrictions rather trying, decided to leave inner Paris for the inner and outer suburbs or other regions of France altogether.
The trend to leave inner Paris can also be seen among the households that returned from London following Brexit.
This declining trend comes hand in hand with an increase in demographic pressure in the rest of the Ile-de-France region (excluding Paris). The departments in the inner and outer suburbs have seen their population grow from 8.5 million to 10.3 million people between 1990 and 2022.
Therefore, since 1990, Paris has experienced a relatively stable demographic dynamic, even starting to decline from 2021. We can conclude then that demand does not seem to be behind the significant rise in the actual prices of residential property in Paris.
Is this due to decreasing supply?
In Paris, transaction volumes are higher during periods of increased prices (between 35,000 and 40,000 transactions per annum), whilst these volumes fall significantly during periods of decreased prices (25,000 to 30,000 transactions).
It seems high time to put an end to a common misconception: a fall in the volume of properties for sale does not automatically increase prices.
The economic reality is different: when prices are high, property owners are more inclined to sell their property, either to crystallise a capital gain or to undertake a buy–sell transaction (incidentally, often in the reverse order) because they have confidence in the market. Conversely, when prices are falling, the market seizes up: property owners delay potential sales as long as possible, waiting for better days.
This leads to the following conclusion: the classic economic mechanisms of supply and demand simply cannot explain the historical growth in residential property prices per square metre in Paris.
These price dynamics should really be considered as ‘contra-economic’: supply grows in volume when prices increase; supply falls in volume when prices decrease.
When concentrating our analysis on the recent public health crisis, we can observe that transaction volumes decreased in the Parisian market. Indeed, the residential property market in Paris experienced a dip from the first lockdown, falling from 35,100 transactions per annum to 31,200 in 2020, before returning to 34,900 in 2021. This change can be explained in particular by the specific structure of the lockdown, with investors unable to complete the purchasing process for residential property (visits, meeting with the notary, move, etc.).
When the strict public health measures were lifted, the property market was able to recover quickly.
WHAT ARE THE CONSEQUENCES OF THE COVID-19 CRISIS ON THE RESIDENTIAL PROPERTY MARKET IN PARIS?
For some investors, the change to our way of life due to the public health measures – remote working and leaving Paris – was to lead to a fall in Parisian property prices, or even to a bursting of the property bubble comparable to that of 1991.
Marked by the successive lockdowns and discouraged by the more difficult conditions to obtain a mortgage, people could have started a mass exodus from Paris, resulting in a fall in residential property prices in the city.
We can see in the graph below that the public health crisis appears to have had little impact on the price per square metre in Paris. Prices have flattened, or slightly decreased, but have not fallen below the €10,000-per-square-metre mark on average.
WHAT ARE THE REAL DRIVERS EXPLAINING THE INCREASE IN RESIDENTIAL PROPERTY PRICES PER SQUARE METRE IN PARIS?
As we cannot use demography and standard economics to explain the long-term increase in prices observed, what are the variables that really can explain this sharp increase?
To answer this question, we have built a multi-variable regression model using a long historical series (1990–2022), which comes to the following conclusion:
Since 1990, the development of residential property prices per square metre in Paris can be explained ‘entirely’ and ‘mathematically’ by two financial variables.
To put it simply, this means that it is possible to explain—and potentially predict—the price per square metre of residential property in Paris with an extremely high degree of accuracy using only two financial variables.
– For those familiar with such techniques, our multi-variable regression model reached a coefficient of determination (R²) of 94%¹.
The first variable involved is the following:
– Variable 1: the spread between the French 10-year OAT rate and Insee’s inflation rate.
As shown in the graph below, this variable can be examined in isolation. In simple terms, it represents a borrower’s interest rate adjusted for economic inflation, that is, the net real interest rate for the borrower.
This variable thus makes it possible to take into account the attractiveness of the resources available to the borrower to acquire a residential property.
The spread highlights the impact of French 10-year OAT rates in the development of property prices per square metre. Indeed, when the French 10-year OAT rates fall, the borrowing capacity of an individual borrower rises significantly. For example, if an individual borrower’s rate decreases by one point (100 basis points), his or her borrowing capacity increases by approximately 10%. But the Parisian property market incorporates this component in the development of prices per square metre. The fall in rates enables a rise in borrowing capacity for buyers but not in terms of the number of square metres that they can buy. The market absorbs any increase in borrowing capacity in the price per square metre.
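The rate sensitivity described above can be illustrated with the standard fixed-rate annuity formula. The sketch below is illustrative only: the 20-year term, the €1,500 monthly budget and the 2%-to-1% rate move are assumptions, not figures from the article:

```python
def borrowing_capacity(monthly_payment: float, annual_rate: float, years: int) -> float:
    """Maximum loan principal affordable at a fixed monthly payment
    (present value of an annuity at the monthly rate)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return monthly_payment * (1 - (1 + r) ** -n) / r

# Same monthly budget, 20-year term, rate falling by 100 basis points
payment = 1500.0
before = borrowing_capacity(payment, 0.02, 20)
after = borrowing_capacity(payment, 0.01, 20)
increase = after / before - 1
print(f"Capacity increase: {increase:.1%}")  # → Capacity increase: 10.0%
```

A one-point fall in the borrowing rate raises capacity by roughly 10%, matching the order of magnitude stated in the text; the exact figure varies with the loan term and the starting rate.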
Furthermore, this variable takes into account the effect of inflation on the property market. The year 2017 marks the beginning of a scissor effect between the French 10-year OAT rate and inflation. Interest rates remained stable whilst inflation picked up significantly. For the first time, the spread (10-year OAT – inflation) became negative, meaning that for the first time, individual borrowers could borrow at negative net real rates.
The scissor effect has intensified since 2018, bringing about the continued rise in residential property prices per square metre in Paris between 2018 and 2020.
But since 2021, the striking rise in inflation coupled with the stable low base rates have been behind a financially untenable spread. This spread has moved from −0.6% in 2020 to −3.6% in 2022. Over the same period, prices per square metre have started to decline just as the rise in the cost of living has accelerated.
The current intention of the European Central Bank (ECB) to increase base rates in order to curb inflation should gradually attenuate this historic scissor effect. But prices per square metre of residential property in Paris appear to have begun a noteworthy decline.
The fall in spread is not the only or even the best driver explaining the historical increase in prices per square metre in Paris.
The second historical variable is the following:
– Variable 2: the size of the ECB’s balance sheet.
– Taken in isolation, this variable explains the price per square metre with an R² of approximately 94%. It is itself highly correlated with the first variable, as both reflect the same coordinated ECB monetary policy decisions.
This variable highlights the consequences of the quantitative easing policy implemented by the ECB on the valuation of financial asset classes, including Parisian property.
To enable the members of the Eurogroup to face various economic crises (including the public health crisis), the ECB put in place in 2009 an ambitious quantitative easing policy, similarly to the Fed, with the aim of ensuring the stability of the euro by injecting a vast quantity of currency into the market.
The supply of this monetary mass to banks and the maintenance of low base rates are behind the historical rise of residential property prices in Paris.
Based on our analyses, it is possible to correlate the historical development of prices per square metre in Paris with the size of the ECB’s balance sheet at 94%.
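The kind of single-variable fit described here can be sketched as an ordinary least squares regression with its R². The example below runs on synthetic stand-in data (the actual notary price series and ECB balance sheet figures are not reproduced in the article), so the numbers are purely illustrative:

```python
import numpy as np

# Synthetic stand-ins: a hypothetical ECB balance sheet series (EUR trn)
# and a price-per-square-metre series that tracks it with some noise.
rng = np.random.default_rng(0)
balance_sheet = np.linspace(2.0, 8.8, 30)                        # explanatory variable
price_sqm = 4000 + 800 * balance_sheet + rng.normal(0, 150, 30)  # EUR per m2

# Ordinary least squares fit: price = a + b * balance_sheet
b, a = np.polyfit(balance_sheet, price_sqm, 1)
fitted = a + b * balance_sheet

# Coefficient of determination (R^2)
ss_res = np.sum((price_sqm - fitted) ** 2)
ss_tot = np.sum((price_sqm - price_sqm.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")  # high by construction of the synthetic data
```

An R² of 94% on the real series, as reported above, means the single explanatory variable accounts for 94% of the variance in prices per square metre over the period.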
The graph below presents the development of the ECB’s balance sheet, showing its consistent growth since 2009. This strong growth is the fruit of the implementation of unconventional monetary policies to respond to the crises felt by the Euro-system in 2009, 2011 and 2020. By massively acquiring public and private debt on the European market to serve the refinancing demands of banks, the ECB has created favourable financing conditions in the eurozone in a context of crisis and very low interest rates.
Since 2009, the ECB has implemented two ambitious net asset purchase programmes: the Asset Purchase Programme (APP) and the Pandemic Emergency Purchase Programme (PEPP). These programmes are behind an unprecedented increase in the size of the ECB’s balance sheet.
However, though we can observe another doubling of the ECB’s balance sheet between 2020 and 2022, the price per square metre of property in Paris has slightly fallen, compared with a considerable increase over the period from 2011 to 2021.
This is a major break in the trend.
CONCLUSION
Over the long-term historical period, we have observed that the development of prices per square metre in Paris is highly correlated with the monetary policy of the European regulator, in particular via two variables: the spread (10-year OAT – inflation) and the size of the ECB’s balance sheet.
Between 1999 and 2020, a mathematical formula makes it possible to predict with a high degree of relevance the development of prices per square metre in Paris. In short, it suffices to listen to the European Central Bank and to anticipate and model its decisions.
But the years 2021 and 2022 have been marked by a drastic change in macroeconomic indicators.
Inflation has returned to levels not experienced since the 1970s (5.2% in May 2022), which disrupts the spread rate.
Similarly, the ECB’s decision to put in place a massive debt purchasing plan in the context of the public health crisis led to the doubling of its monetary balance sheet, with no particular effect on the price per square metre in Paris.
The two variables that were the drivers of the rise in prices since 1999 can no longer explain the development of the price per square metre of residential property in Paris from 2020 onwards.
The model is broken.
This likely marks the beginning of a wait-and-see period that could lead to a fall in both volumes and prices per square metre (versus inflation).
What remains to be seen is how long the property investor of 2020 will have to endure this market correction and whether Parisian property will play its role as a safe haven investment as it did during the inflationary period of the 1970s.
Sophie Chassat Philosopher, Partner at Wemean
Learning simplicity again
To stop seeing everything through the prism of complexity: that is without doubt the most difficult thing that we must learn to do again. It is the most difficult because the paradigm of ‘complex thought’ (Edgar Morin1) has taken over. Our everyday semantics is evidence enough of this: all is ‘systemic’, ‘hybrid’, ‘holistic’ or ‘fluid’. No matter where we look, the ‘VUCA’ world (for volatile, uncertain, complex and ambiguous2) stretches as far as the eye can see.
Yet, applied to every situation, this complexity dogma actually makes us lose understanding, potential for action and responsibility. First, we lose understanding because it forces on us a baroque representation of the world where everything is entangled, where the part is in the whole but the whole is also in the part3, and where the causes of an event are indeterminable and subject to the retroactive effects of their own consequences4. Referring the search for truth to a reductive and mutilating approach to reality, it also encourages the equivalence of opinions and accentuates the shortcomings of the post-truth era5.
Second, we lose potential for action because from the moment everything becomes complex, how can we not be consumed with panic and paralysis? Where should we start if, as soon as we touch just one thread of the fabric of reality, the whole spool risks becoming even more entangled? Our inaction in the face of climate change derives in part from this representation of the problem as something of endless complexity and from the idea that the slightest attempt to do something about it would raise other issues that are even worse. The fable of the butterfly’s wings in Brazil that generates a hurricane at the other end of the world leads to our inertia and powerlessness. Yet ‘the secret to action is to get started’6, as the philosopher Alain put it.
‘It’s complicated’ therefore becomes an excuse not to act. Whilst the state of the world requires us to commit to action now more than ever, today we are seeing a great disengagement, visible in both the civic and corporate worlds. Referring to effects of the system, the complexity dogma takes away individual responsibility. Learning to think, act and live with simplicity appears more urgent than ever. But the path is not easy. As the minimalist architect John Pawson put it: ‘Simplicity is actually very difficult to achieve. It depends on care, thought, knowledge and patience.’7 Let us add ‘courage’ to this list of ingredients, the courage to question a triumphant representation of reality that may well be one of our great contemporary ideologies.
____________
1Published in 1990, the book Introduction à la pensée complexe [Introduction to complex thought] by Edgar Morin presents the main principles of complex thought.
2The acronym VUCA was created by the US army in the 1990s.
3Edgar Morin calls this idea the ‘holographic principle’.
4This is what Morin calls the ‘recursive principle’.
5This is a possible interpretation of another principle of complex thought, the ‘dialogical principle’.
6Alain, translated from the French ‘Le secret de l’action, c’est de s’y mettre.’
7The book Minimum by John Pawson was first published in 2006.
Philippe Raimbourg Director of the Ecole de Management de la Sorbonne (Université Panthéon-Sorbonne) Affiliate professor at ESCP Business School
The dynamics of corporate credit spreads
The analysis of credit spread dynamics largely relates to the analysis of financial ratings and their impact on the quoted prices of debt securities.
This issue has been documented regularly for over 50 years and has led to numerous statistical studies. For the most part, these studies are consistent and highlight the different reactions of investors to cases of downgrading and upgrading. Observing the quoted prices of debt securities highlights the financial market’s expectation for downgrading, with quoted prices trending significantly downwards several trading days before the downgrade itself. On the agency’s announcement date, the market’s reactions are small in scale. By contrast, upgrades are hardly ever anticipated, with debt security holders particularly vigilant so as not to incur capital losses as a result of a downgrade. It is worth noting that because of the limited maturity of the debt securities, buying orders are structurally higher than selling orders; as a result, the latter are more easily seen as signals of mistrust by the market.
More recent studies have focused on the impact of rating changes on the volatility and liquidity of securities. Downgrades are preceded by an increase in volatility and a wider bid-ask spread, demonstrating a fall in liquidity; uncertainty as to the credit risk of the security in question leads to different reactions from investors and disparate valuations. Publishing the rating effectively homogenises investor perceptions, reduces volatility and increases liquidity. The effects are not so clear-cut when upgrading because, with the change to the rating being unanticipated, the effect of perception homogenisation is weaker and counterbalanced by the desire of some investors to profit from the improved credit quality to make speculative gains.
These studies shed new light on the question of the utility of rating agencies. The agencies effectively send information to investors, but perhaps not to all of them. Indeed, informed investors may outpace the agencies in the monitoring of the issuers’ credit quality. However, less informed investors need the opinion of the agency to be certain that the observed decrease in prices effectively corresponds to a downgrade in credit quality. The agency’s announcement removes all disparity of perception between investors and highlights the utility of the agency, which stabilises prices and increases liquidity. The dynamics of credit spreads cannot be studied separately from those of other marketable securities. After all, the debt world is not cut off from the equity world, something that we can easily understand through intuition alone. A fall in share prices is generally the result of operational difficulties leading to a reduction in operating cash flows and lower coverage of remuneration expenses and debt repayments. In parallel, this lower share value means an increase in financial leverage and, at a given volatility of the rate of return on assets, an increase in the volatility of the rate of return on equity. A reduction in the share price, an increase in financial leverage and a rise in the share price volatility and credit risk therefore all combine. From a theoretical point of view, Robert Merton was the first to express the credit spread as a function of the share price. We will not cover his work here. We will instead look into the credit-equity relationship as it is frequently used in the finance industry. Indeed, typically a power function denominated on growth rates is used for this purpose.
CDSt / CDSREF = [SREF / St]^α
The credit spread growth rate, measured by the CDS, is therefore a function of the rate of decline of the share price, raised to a power α that we assume to be positive; REF denotes the reference point used as the basis for calculating the rates of change of the CDS and the share.
Knowledge of the parameter α makes it possible to specify this relationship fully. We first note that, as defined by the preceding equation, α is the opposite of the elasticity of the CDS value compared with the share value. By taking the logarithm of this equation, we get:
Ln [CDSt / CDSREF] = – α Ln [St / SREF]
α = – Ln [CDSt / CDSREF] / Ln [St / SREF]
As a ratio of two relative growth rates, the α parameter is indeed, up to its sign, the elasticity of the CDS value with respect to the share value, which we can also write as:
α = – [S/CDS] [δCDS / δS]
By expressing the derivative of the CDS value in relation to the share value [δCDS / δS], we are led to the following value of the α parameter:
α = 1 + l, with l = D/(S+D)
The debt and equity worlds are therefore closely related: an inverse relationship links credit spreads and share prices; this relationship is heavily dependent on the financial structure of the company and its leverage calculated in relation to the balance sheet total (S+D). The higher this leverage, the more any potential underperformance in share price will lead to significant increases in the credit spread.
From an empirical perspective, though this correlation appears relatively low when the markets are calm, it is very high when the markets are volatile. When the leverage is low, the graph representing the development of credit spreads (in ordinates) in relation to share prices highlights a relatively linear relationship close to horizontal; however, when leverage is much higher, a highly convex line appears.
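The relationship above can be sketched numerically. This is a minimal illustration with assumed figures (equity value S, debt D, a 100 bps reference spread); none of the numbers come from the article, and the functions simply restate the formulas given in the text:

```python
import math

def alpha(S: float, D: float) -> float:
    """Power parameter: alpha = 1 + l, with leverage l = D / (S + D)."""
    return 1.0 + D / (S + D)

def cds(cds_ref: float, s_ref: float, s_t: float, a: float) -> float:
    """CDS_t = CDS_REF * (S_REF / S_t) ** alpha."""
    return cds_ref * (s_ref / s_t) ** a

# A 20% share price fall (50 -> 40) for a firm with leverage l = 0.4,
# i.e. alpha = 1.4, starting from a 100 bps reference spread.
a = alpha(S=60.0, D=40.0)
spread = cds(cds_ref=100.0, s_ref=50.0, s_t=40.0, a=a)

# The log form recovers alpha from the observed changes:
# alpha = -Ln(CDS_t / CDS_REF) / Ln(S_t / S_REF)
recovered = -math.log(spread / 100.0) / math.log(40.0 / 50.0)

# Convexity: for the same falling price path, the spread of a heavily
# levered firm (l = 0.8) rises far faster than that of a lightly
# levered one (l = 0.1), mirroring the empirical observation above.
prices = [50.0, 40.0, 30.0, 20.0]
low = [cds(100.0, 50.0, p, alpha(90.0, 10.0)) for p in prices]
high = [cds(100.0, 50.0, p, alpha(20.0, 80.0)) for p in prices]
```

The last two lines trace the spread-versus-share-price curves for the two leverage regimes: near-linear when leverage is low, sharply convex when it is high.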
With this relationship established, we can now question the direction of causality behind it, or, if we prefer, ask which is the lead market. To do so, it is necessary to undertake co-integration tests between the credit market and the equity market, arbitrageurs being responsible for keeping the long-term equilibrium of these two markets consistent.
To this end, two series of econometric tests are conducted symmetrically. The first series aims to explain the changes in share prices by those in CDS, whether delayed or not by several periods, and vice versa as regards changes in the value of CDS, in the latter case incorporating changes in financial leverage. These relationships, tested over the period 2008–2020 for 220 listed securities on the S&P 500 index, bring to light the following results:
– There are information channels between the listed equity segment and the CDS market. These information channels concern all businesses, no matter their sector or their level of debt: ‘informed’ traders, because of the existence of financial leverage, make decisions just as much on the equity market as on the credit market.
– In the majority of cases (two thirds of companies reviewed), the lead market is the equity market whose developments determine around 70% of the developments in the CDS market.
– However, in the case of companies with significant leverage, the price discovery process starts with the CDS market, which explains more than 50% of price variations. This empirical work is evidence, if any were needed, of the importance of the structural credit risk model proposed by the Nobel laureate Robert Merton in 1974.
References
Lovo, S., Raimbourg, Ph., Salvadè, F. (2022), ‘Credit Rating Agencies, Information Asymmetry, and US Bond Liquidity’, Journal of Business, Finance and Accounting, https://doi.org/10.1111/jbfa.12610
Zimmermann, P. (2015), ‘Revisiting the Credit-Equity Power Relationship’, The Journal of Fixed Income, 24, 3, 77-87.
Zimmermann, P. (2021), ‘The Role of the Leverage Effect in the Price Discovery Process of Credit Markets’, Journal of Economic Dynamics and Control, 122, 104033.
The recent legislative elections in France highlighted once again the discontent of the electorate in numerous countries with the development of their environment. Almost 60% of French voters listed on the electoral register made their choice… not to choose (non-voters, blank votes and spoilt votes) and almost 40% of those who did cast a ballot did so in favour of political organisations (parties or alliances) that are traditionally anti-establishment (the Rassemblement National and the Nouvelle Union Populaire, Ecologique et Sociale).
If we take a short-term focus, finding the reasons for this mix of pessimism and ill-humour—confirmed as it happens by a stark contraction of household confidence—proves to be quite simple. The net acceleration of consumer prices and the war at the European Union’s border, both phenomena being inherently linked, are obvious reasons. These upheavals, the gravity of which should not be underestimated (as we shall see), combine and indeed amplify a general disquiet that has been solidly in place for some time.
Without needlessly going too far back, we cannot fail to recognise that over the past 15 years the world has experienced an entire series of events that have contributed if not to a loss of our bearings, then to the questioning of the way we perceive the environment in which we are evolving.
Let us list some of these events, without looking to be exhaustive:
• the financial crisis (2008)
• the swing between the USA tending to retreat from global affairs and China, up to now, being more and more present (the new silk roads in 2015), with Europe in the middle trying to find itself (Brexit in 2016)
• the change in direction of US policy towards China (distrust and distance from 2017)
• Russia’s challenging of its neighbours’ borders (2008, 2014 and of course very recently in February this year)
• societies becoming more fragile (the Arab Spring in 2010–2011, the French Yellow Jackets in 2018, the assault on Congress in the USA in 2021, the Black Lives Matter movement in 2013, the refugee crisis in 2015, the Paris attacks in 2015)
• the rise of the environmental question (from the Fukushima accident in Japan in 2011 to a more complete realisation of climate change from 2018, with Greta Thunberg, among others)
• an economy that does not work for the benefit of all (the Panama Papers in 2016 on tax avoidance processes), against a contrasting backdrop of only passable, if not mediocre, performance at the macroeconomic scale but more dazzling performance at the microeconomic level (cf. GDP, and therefore revenues, vs the profits of listed companies)
• a pandemic crisis (COVID-19) highlighting the fragility of production chains that are too long and too complex (‘just in case’ taking over from ‘just in time’ but with what economic consequences?), not to mention the crisis linked to humanity’s abuse of Mother Nature
• the political and social question (the need to protect and share wealth)
It is on these already weakened foundations that the most recent events (inflation and the war, to put it simply) are being felt as potential vectors of rupture, similarly to potential catalysts of change that were until now latent. This rupture could take two forms.
First, and based on a deductive approach, there is the risk that geopolitical tectonic plates, to quote Pierre-Olivier Gourinchas, the new chief economist of the IMF, take shape, ‘fragmenting the global economy into distinct economic blocs with different ideologies, political systems, technology standards, cross border payment and trade systems, and reserve currencies’. The political landscape of the world would be drastically transformed, with the economic destabilisation that would result from it, at least in the beginning.
Then, and based on an empirical approach drawn from Applied History (the use of history to help benefit people in current and future times), there is the tempting parallel between the current situation and the situation that prevailed in the second half of the 1970s. At the time, the ingredients were episodes of war or regime change in the Middle East and a striking rise in oil prices. The consequence was twofold: the onset of spiralling inflation and a change of regulation, at the same time less interventionist and Keynesian and more liberal and ‘Friedmanian’: less systematic drive for budgetary activism, less regulation of the labour market, privatisation of public companies and more openness to external exchanges.
Let us delve into this second topic – or at least try. In the same way that correlation does not mean causation, parallel might mean trivial! By what path would comparable causes produce a change in the conduct of the economy, but in the opposite direction?
Is it not time to foresee a return to more voluntarist economic policies instead of prioritising laissez-faire economics? Yes, of course, but we must understand that this aspiration derives more from a reaction to a general context considered dysfunctional rather than from the search for an appropriate response to the beginning of snowballing prices.
Public opinion (or those who influence it) seems to show dissatisfaction, with the source of the problem behind it lying in the regulation in place today. This leads to an emphasis on an attitude that favours the alternative to the current logic: goodbye Friedman and hello Keynes, nice to see you again!
Nevertheless, beyond the causalities and their occasional loose ends, the aspiration for a change in the administration of the economy remains. The keywords might be the following: energy transition and inclusion. That means cooperation between countries (yes to competition in an open world, but no to strategic rivalry); reconciliation between public and private decision-makers and the various other actors of economic and social life (stakeholding); and the return to a ‘normal’ redistribution of wealth from the most to the least privileged.
To paraphrase Harvard University Professor Dani Rodrik, a globalised economic system cannot be the end and the political and social balances of each country the means; the logic must be put in the right order (a return in a way to the spirit of Bretton Woods).
In this way, at least we can hope, the global economic system will not fragment and inflation will be contained.
At least in the West, citizens and political leaders should align their aspirations and their efforts in this quest. Will businesses follow them? Will they not have something to lose, at least the largest of them?
We must of course raise the difficulty that may exist in reconciling the economic globalisation experienced over the past 30 years or so and the values that now prevail in society.
This requires adaptation, but without opposing the behaviour of the past and the aspirations—most certainly lasting—that have emerged. In the future (far ahead!), there will be no economic success in a world made inhospitable by the climate or by politics.
It is possible to ‘make some money’ occasionally by optimising customer, supplier and employee relationships, but taking a more long-term view, a ‘functional’ planet and ‘calm’ society are prerequisites.
Maybe we too easily tend to oppose market logic head-on to state and societal logic. Doubtless, it is more a question of positioning the cursor in the right place based on the changes observed or foreseen. That is where we are today; it is more about evolution than revolution!
Accuracy is pleased to announce that five of its experts have been listed as Thought Leaders in Who’s Who Legal’s Arbitration Expert Witnesses 2022 edition.
Through nominations from peers and clients, the following Accuracy experts have been recognised as the leading names in the field:
Who’s Who Legal identifies the foremost legal practitioners and consulting experts in business law based upon comprehensive, independent research. Entry into their guides is based solely on merit.
Accuracy’s forensic, litigation and arbitration experts combine technical skills in corporate finance, accounting, financial modelling, economics and market analysis with many years of forensic and transaction experience. We participate in different forms of dispute resolution, including arbitration, litigation and mediation. We also frequently assist in cases of actual or suspected fraud. Our expert teams operate on the following basis:
• An in-depth assessment of the situation;
• An approach which values a transparent, detailed and well-argued presentation of the economic, financial or accounting issues at the heart of the case;
• Work carried out objectively, with the intention of making it easier for the arbitrators to reach a decision;
• Clear, robust written expert reports, including concise summaries and detailed backup;
• A proven ability to present and defend our conclusions orally.
Our approach provides for a more comprehensive and richer response to the numerous challenges of a dispute. Additionally, our team includes delay and quantum experts, able to assess time related costs and quantify financial damages related to dispute cases on major construction projects.
Accuracy conducted sell-side financial due diligence for Five Arrows Principal Investments in the context of the sale of Hygie31 (Laf Santé) to Latour Capital and Bpifrance.
With the global economy in turmoil, energy prices are exploding. A general rising trend in the prices of oil and gas since mid-last year has been compounded in recent weeks by the war in Ukraine and the economic reprisals taken by the West against Russia. These reprisals now include a partial embargo on Russian hydrocarbons. But what does that mean in the short term? And how does the current situation compare with the oil crisis of the 1970s? Let us delve into the matter in this Economic Brief.
Gulf Cooperation Council (GCC) countries have implemented many monitoring, regulatory and legal initiatives to address the risks of criminal exploitation during the pandemic while dealing with ever-changing technologies and winning the trust of overseas investors. These initiatives have presented many challenges and opportunities for corporate investigators. This article discusses the details of some of the changes and their likely impact on regional investigations.
Discussion points
• Current regulatory and legal landscape in the GCC
• The impact of data privacy laws
• Stricter counterterrorism and anti-money laundering (AML) controls
• The impact of bankruptcy and insolvency laws
• The change in the cybercrime landscape
• The advent of cryptocurrency and the need for investigation
Referenced in this article
• GCC economic vision
• Data privacy laws
• AML and FATF
• Bankruptcy and insolvency laws
• Cybercrime laws
• Cryptocurrencies
• Crypto investigations
Introduction
The Gulf Cooperation Council, comprising the Kingdom of Saudi Arabia (KSA), the United Arab Emirates (UAE), Qatar, Bahrain, Oman and Kuwait, has seen strong economic growth over several decades. Most GCC countries are continuing to seek outside investment to support their ambitious development plans (e.g., Saudi Vision 2030, Dubai 2040 Urban Master Plan, Abu Dhabi 2030 Economic Vision, Qatar National Vision 2030 and Kuwait Vision 2035).
Although the GCC has managed sustained economic growth, the corporate investigations landscape has struggled for many years to keep up with the demands of companies faced with numerous risks owing to underdeveloped regulatory and legal frameworks in GCC countries. Those who wish to prey on individuals and corporations through fraud, cybercrime and misconduct have exploited the regulatory and legal gaps, and there is also significant regional exposure to sanctions-related issues and money laundering threats from organised crime.
Recognising these risks, GCC governments have worked to adapt both their regulatory and legal frameworks in recent years to make their economies more attractive to outside investors, including by investing heavily in initiatives to counter the threat of crimes and regulatory breaches and to reduce criminal activity. For example, authorities in the GCC have sought to modernise their regulatory regimes through, among other things, enhanced regulatory monitoring and harsher penalties relating to cybersecurity, digital identity, digital currencies, fintech, anti-money laundering (AML), data protection and privacy, and terrorist financing.
These initiatives are in addition to guidance issued in response to the covid-19 pandemic and increased international cooperation on transparency, extradition and money laundering targets. These modernisation efforts, although sometimes slow, have also seen the establishment of new regulators.
Data privacy laws
As of March 2022, five countries in the GCC have enacted new data privacy laws to strictly monitor and control the use of personal data:
• KSA Royal Decree M/19 of 9/2/1443H;1
• UAE Federal Decree-Law No. 45 of 2021;2
• Qatar’s Data Protection Law No. 13 of 2016;
• Bahrain’s Law No. 30 of 2018;3
• Oman’s Royal Decree 6/2022;4 and
• Law No. 5 of 2020 of the Dubai International Finance Centre (DIFC).
The GCC countries join more than 130 jurisdictions with comprehensive privacy laws intended to safeguard individuals against the misuse of their personal data by organisations that receive or use such data. The GCC regulations bring regional laws in line with international standards, and there are strict penalties for the misuse of data or breaches of the law, with fines reaching up to US$800,000 in KSA and two years’ imprisonment for the misuse of sensitive data.5
These regulations potentially impact global organisations as the territorial scope encompasses any organisation that carries out processing activities about data subjects6 in the GCC, regardless of where they are established.
In this sense, the regulations are similar to the EU General Data Protection Regulation (GDPR), under which the authorities have issued more than 900 fines since its inception in 2018 across the European Economic Area and the United Kingdom, punishing organisations such as Amazon (US$877 million)7 and WhatsApp (US$255 million).8 Properly implemented and enforced, the GCC regulations could be similarly punitive to organisations that fail to prepare and change adequately.
The impact on corporate investigators is twofold: the first impact is when a breach is suspected and needs to be investigated and reported by the organisation. Many of the regulations require a reporting mechanism (typically through a commissioner or a data office). To respond to such a situation, organisations should work closely with their investigators and compliance officers to put in place and implement appropriate policies and procedures.
The second impact is how investigators gather and process information to pursue an investigation. Consideration must be given to receiving a data subject’s consent to handle the data or confirm that there is a lawful circumstance for its processing. In the context of an investigation that may include gathering and processing personal data, a lawful purpose could comprise any of the following:
• where the data subject has made the personal data public;
• protection of the interests of the data subject;
• being part of a judicial or security procedure; or
• medical purposes or matters of public health.
Terrorism and AML
The global community has made AML and combating the financing of terrorism (CFT) a priority. These efforts aim to guard the integrity of the international financial system, cut off the assets accessible to terrorists and make it harder for those engaged in wrongdoing to profit from their felonious activities.
Money laundering is secondary to a primary crime, such as corruption, drug trafficking, human trafficking, fraud and cybercrime. The original crime is called a predicate offence, and it is how bad actors acquire ‘dirty money’. Stopping money laundering can help stop primary offences and further help prevent the diversion of money away from financially productive uses. These diversions can have damaging impacts on businesses and the financial sector.9
The Financial Action Task Force (FATF) on money laundering, a 39-member intergovernmental body established by the 1989 G7 Summit in Paris,10 has primary responsibility for developing the global standards for AML and CFT (AML/CFT). It works in close cooperation with other key international organisations, including the IMF.
Certain GCC and neighbouring countries have sought the assistance of the FATF in assessing their AML regulatory regimes. Saudi Arabia, the UAE, Bahrain, Egypt and Jordan have completed the fourth round of mutual evaluations by the FATF, with Qatar currently going through the process. As of March 2022, the UAE, Jordan and Yemen are listed in the FATF’s grey list, meaning they are listed as high-risk countries, which can negatively impact investments.11
The UAE is taking steps to shed its reputation as a financial crime hotspot. In 2021, the UAE’s central bank fined 11 banks a total of US$12.5 million for having inadequate AML and sanctions controls at the end of 2019.12 It has also stepped up its AML/CFT enforcement efforts, with new extradition deals planned with several countries and several cross-border training operations.
Further, changes in the UAE’s legislation and the development of enforcement guidelines have advanced money laundering investigations and prosecutions.13 For instance, the UAE has updated key legal instruments, such as Federal Decree-Law No. 20 of 2018 on AML/CFT, which has been further enhanced and amended through Federal Decree No. 26 of 2021.
The UAE’s grey list placement initially led to increased investigations prior to the covid-19 pandemic, particularly regarding shareholder disputes as companies were more sensitive to the risk and therefore conducted more internal investigations. Some of this increase in investigations also resulted from, for example, the change in company law in the UAE14 and regulatory investigations in the pharmaceutical industry.
However, inquiries and reviews stalled as companies looked to control costs while having to rapidly revise policies and procedures as remote working became the norm. There has not yet been a spike in the number of investigations in the wake of the covid-19 pandemic; however, with global indicators showing a massive increase in fraud and corruption,15 it is highly likely that there will be an increase in investigations (along the lines of the exponential growth in investigations that occurred in the aftermath of the financial crisis in 2008).
Bankruptcy and insolvency regulations
GCC countries have also sought to become a more attractive home for investment by creating more modern, recognisable insolvency regimes that contain modern restructuring tools for businesses facing distress. The KSA, Bahrain, Oman, Kuwait and the UAE have either brought in new laws or updated existing laws to make them more investor-friendly and, in some cases, to decriminalise certain aspects related to personal insolvency. The World Bank sees these creditor rights and insolvency systems as being of key importance in providing investor confidence in these countries.16
Given these updated insolvency laws, liquidation is no longer the last resort for companies in those jurisdictions. As a result, companies are now conducting more internal investigations to understand if fraud or management errors may be leading the companies to insolvency or bankruptcy rather than just bad business practices or market pressure; in the past, companies and individuals ran the risk of imprisonment for non-payment of debts, which led to companies trying to delay liquidation.
As an example, one of the first companies to utilise the new KSA bankruptcy law in the past year was Ahmad Hamad Al Gosaibi & Brothers (AHAB) after a global dispute with Maan Al-Sanea and the Saad Group. Prior to the new law, AHAB had few options to restructure its debt other than to go into liquidation. This would likely have led to the break-up of the family partnership businesses (most of which were operating at a profit), the loss of all the partners’ personal assets and possible imprisonment for the partners.
In 2021, the KSA court ratified AHAB’s efforts to restructure US$7.5 billion of obligations with over 100 local and international financial institutions, thus bringing an end to a prolonged investigation and litigation that extended for more than 12 years.17 The applicable recent regulations are:
• the UAE Bankruptcy Law No. 9 of 2016, which was later amended by Law No. 21 of 2020;
• the KSA Bankruptcy Law, introduced in 2018;
• the Bahrain Reorganisation and Bankruptcy Law No. 22/2018;
• Kuwait’s Law No. 71 of 2020; and
• Oman’s Royal Decree 53/2019.
Cybercrime laws and regulations
The global cost of cybercrime is expected to hit US$10 trillion in 2025, according to a 2021 cyberwarfare report by Cybersecurity Ventures.18 These figures showcase the enormity of the threat of cyber-attacks and breaches.
At the regulatory level, the most potent deterrents for this type of crime are strict regulations and penalties for using technology to commit or facilitate a crime, and several GCC countries have recently adopted laws in this space. For instance, the UAE’s latest Cybercrime Law19 addresses hacking, fake news, impersonation, internet bots and cryptocurrency and provides a framework for harsher penalties for breaches of the law.
The KSA’s Anti-Cybercrime Law of 2007,20 the Qatar Cybercrime Prevention Law,21 the Oman Cybercrime Law22 and Kuwait’s Combating Information Technology Crimes23 all address cybercrime to varying degrees, although they require updating to be in line with the latest technologies used to undertake cybercrime, such as the misuse of cryptocurrencies and non-fungible tokens.
As of April 2022, the DIFC and the Abu Dhabi Global Market have announced plans for the regulation of crypto assets and have already established that crypto exchanges will be regulated under these authorities going forward.24
With these new laws and regulations in place, criminals are moving to new methods of making profit. Many illegal gains are now obtained or laundered through deregulated cryptocurrencies. Cryptocurrencies pose unique challenges to investigators charged with identifying, tracing or seizing illicitly gained funds and assets.
Cryptocurrency
Blockchain-based cryptocurrencies allow individuals to engage in peer-to-peer financial transactions or enter into contracts as decentralised platforms. In either case, there is no need for trusted third-party intermediaries.
A cryptocurrency is generally defined as digital tokens or ‘coins’ on a distributed and decentralised ledger called a blockchain. Since the launch of bitcoin in 2008, different types of cryptocurrency have expanded dramatically.25 Bitcoin continues to lead the pack of cryptocurrencies in terms of market capitalisation, user base and popularity.
Other virtual currencies, such as Ethereum, are helping to create decentralised financial (DeFi) systems. Some ‘altcoins’ have features that bitcoin does not, such as handling more transactions per second or using different algorithms (e.g., proof of stake).26
Several cryptocurrencies have built-in privacy features or preferences that users can use for more private online commerce.
Troublesome trends
The two key ways in which criminals obtain cryptocurrency are:
• stealing the funds directly; or
• using a scam to trick individuals and organisations into parting with it.
In 2021, crypto criminals stole a record US$3.2 billion-worth of cryptocurrency, according to Chainalysis. That is a fivefold increase on the year before.
Scams continue to surpass outright theft, enabling criminals to swindle US$7.8 billion-worth of cryptocurrency from victims.27
There are several different theft-related trends that investigators should be concerned about. First, most scam-related thefts are ‘rug pull’ scams. Rug pull scams are a relatively new modus operandi in which the crypto criminals ‘pump’ the value of their coins before vanishing with the coffers, leaving their investors with zero-valued assets.28 These scams are not always illegal, but they are always unethical.29
Another new scam targets people online, with victims persuaded to invest in fake cryptocurrency schemes. The scam often combines romance fraud with crypto cons, as victims are promised a ‘happily ever after’ and big crypto gains. The cybercriminals operating this long con spend months gaining online daters’ trust, using romance and the lure of fast crypto returns to trick victims out of their savings. Once the crypto criminal has drained their victim, or when the victim realises they cannot withdraw any of the funds they believe they have invested in the scheme, the perpetrator will disappear.
These facts make crypto crime a fast-growing business, giving criminals an incentive to invest time and money to make money. The rise of the crypto economy and DeFi, coupled with record cryptocurrency prices in 2021,30 has provided criminals with profitable openings. Former US federal prosecutor Jessie Liu emphasised this point when she stated earlier this year: ‘The DOJ has seen cryptocurrency used to “professionalize” cybercrime because bad actors are using digital assets to purchase illicit services such as computer hackers or ransomware software.’31
Prosecutors, investigators and regulators are right to be concerned about these current trends and the impending ability for criminals to use cryptocurrency as part of their arsenal of tools to commit crimes. Buyers risk losing all their money invested in crypto assets and could fall prey to fraud. The European Union’s securities, banking and insurance watchdogs said: ‘Consumers face the very real possibility of losing all their invested money if they buy these assets.’32
Regulators are increasingly worried that more consumers are buying different crypto assets (17,000 by one count),33 including bitcoin and ether, which account for 60 per cent of the market, without being fully aware of the risks. They are also working hard to develop crypto asset regulations that will help make this type of investment safer for consumers. This initiative could herald more widespread adoption once markets in multiple jurisdictions recognise that it is possible to regulate crypto asset service providers and protect crypto asset investors.
Current status
In February 2022, the US Department of Justice (DOJ) declared a milestone seizure of 94,000 bitcoin estimated to be worth over US$3.6 billion – the DOJ’s largest-ever haul of cryptocurrency and the largest single financial seizure in the department’s history.34
Will there be more seizures of this magnitude? Crypto firms in times of financial adversity may receive requests to liquidate large sums of virtual currency as individuals and companies seek a safe (government-backed) refuge for their fortunes. Some exchange clients use cryptocurrency to invest in real estate, while others want businesses in countries such as the UAE to turn their virtual money into hard currency and store it out of harm’s way.
Dubai, the GCC’s financial and business centre and a growing crypto hub, has long been a magnet for the rich. This has also resulted in it being a destination for illicit money. As mentioned, this has resulted in the financial crime and money laundering watchdog, the FATF, putting the UAE on its grey list in March 2022 for increased monitoring.35 The UAE responded by asserting its commitment to strengthening AML/CFT efforts.36
Some businesses in the UAE are already accepting cryptocurrency payments following new laws to regulate virtual assets.37 The United Kingdom recently announced plans to make stablecoins,38 a form of cryptocurrency, a recognised form of payment. Other countries, including those in the GCC, will likely follow suit.
The growing focus on cryptocurrencies will likely lead to multiple attempts to seize such assets, which means seizing illicit funds and helping to prevent the underlying crimes.
Crypto-related crime may be at an all-time high, but legitimate cryptocurrency use far outstrips illegal use.
How much cryptocurrency are crypto criminals holding?
Nevertheless, there are legitimate questions relating to how extensive the use of cryptocurrency is in criminal enterprises. Although the answer is impossible to know, an estimate can be made based on the up-to-date list of known addresses that the likes of Chainalysis have identified as being associated with illicit activity.
As of early 2022, criminal addresses held at least US$10 billion-worth of cryptocurrency. The vast majority is held by wallets related to cryptocurrency theft. Addresses associated with darknet39 markets and scams also contribute to this number. Much of this figure comes not from the initial amounts derived from criminal activity but from the subsequent growth in value of the crypto assets.
In November 2021, the US Federal Bureau of Investigation (FBI) warned of an increase in bitcoin ATM scams.40
The FBI highlighted in an alert that it had seen a rise in scams in which fraudsters direct victims to make payments using bitcoin ATMs and digital QR codes, which were popularised during the pandemic. A static QR code is permanent once created and will always bring users to the same content for as long as it can be scanned with a smartphone; because static codes cannot be edited or tracked, they are best suited to one-time use. The FBI noted a proliferation of fraud schemes involving payment through bitcoin ATMs, including online impersonation fraud and romance scams, which continue to develop. The latter ranks among today’s top five crypto scams, as reported in March 2022 by US News.41
There are bitcoin ATMs in the UAE and the KSA that service many cryptocurrencies, potentially making these scams a key regional consideration.
Moving forward
Blockchain analysis and computer forensics are not stand-alone offerings: several layers of association are needed to identify bad actors.
Initial successes in pursuing crypto crimes have come from new regulations and the tightening of know-your-customer standards among entities that deal with traditional currencies. Converting traditional currency to cryptocurrency dramatically dilutes the anonymity of crypto wallets because identification is required at the point of entry. There are also other sources of intelligence and evidence, such as data gathered forensically from seized mobile phones and computers.
An understanding of the blockchain and its in-built cryptography, the ability to carve addresses from electronic media and the skill to extract private keys from wallets are not typically found among financial investigators. Digital forensic analysts have a more appropriate skill set; however, they may not necessarily understand the financial matters associated with money laundering and fraud. This raises the question of whether hybrid crypto investigators are needed.
Regional investigators and stakeholders must develop tools to ensure that interested parties can request GCC authorities to seize digital assets held by cryptocurrency exchanges without issuing mutual legal assistance treaty (MLAT) requests. Such expedited seizures will be vital to keeping pace with cryptocurrency investigations, since MLAT requests (e.g., those agreed between the UAE and the United States) are usually lengthy, and cryptocurrency moves almost instantaneously.42
One certainty about the future is that any new cryptocurrency that starts to gain traction among users, and criminals in particular, needs to be understood by investigators, where possible, before it forms part of an investigation.
Investigative challenges
The particular features of virtual currency systems built on decentralised finance (DeFi) present new challenges for investigators, both globally and in the GCC. Many of the benefits that cryptocurrency systems promise legitimate consumers, such as increased privacy in transactions and the ability to send funds without an intermediary, serve as obstacles to investigators when the systems are exploited for illegal purposes.
Key challenges identified by investigators dealing with cryptocurrency include regulatory and compliance disparities, transaction obfuscation and anonymity, and the global nature of the systems.
Investigators must standardise and constantly review cybercrime investigative techniques in digital investigations involving DeFi virtual currencies. They may have difficulty getting the information necessary to trace the transaction, especially if the victim uses a wallet service provider or exchanger in an uncooperative foreign jurisdiction or a privacy-orientated cryptocurrency.
Conclusion
GCC countries are seeking to create regulatory regimes covering data privacy, AML/CFT and cybercrime that match the complex environment in which the companies operating in those countries find themselves. The changes to these regimes create both challenges and opportunities for corporate investigators.
The heightened use of cryptocurrency by both genuine investors and criminals illustrates the challenges that both corporate and government investigators will face in this evolving landscape. Investigators must stay up to date or bring in the expertise required to future-proof their effectiveness.
Notes
1 Published in the Official Gazette of September 2021.
2 ‘Overview of UAE’s Federal Decree-Law No. (45) of 2021 on Personal Data Protection (PDPL)’, Securiti (2021).
3 Law No. 30 of 2018 with respect to the Personal Data Protection Law.
4 Royal Decree 6/2022 promulgating the Personal Data Protection Law, published in Official Gazette No. 1429.
5 ‘Global Data Privacy & Security Handbook – Saudi Arabia’, Baker McKenzie (23 January 2020).
6 A data subject is a natural person who can be identified directly or indirectly by specific information (personal data).
7 Sam Shead, ‘Amazon hit with $887 million fine by European privacy watchdog’, CNBC (30 July 2021).
8 Conor Humphries, ‘WhatsApp fined a record 225 mln euro by Ireland over privacy’, Reuters (2 September 2021).
9 The negative consequences of these financial wrongdoings have resulted in the International Monetary Fund (IMF) being very active for over ten years in the anti-money laundering (AML) and combating the financing of terrorism (CFT) arenas. The IMF’s unique blend of global membership, surveillance capabilities and financial sector expertise makes it a central and crucial element of international AML and CFT efforts.
10 FATF, ‘History of the FATF’.
11 FATF, ‘Jurisdictions under Increased Monitoring – March 2022’.
12 John Basquill, ‘UAE threatens anti-money laundering crackdown as 11 banks fined’, Global Trade Review (3 February 2021).
13 Rola Alghoul, ‘UAE releases 4th issue of Al Manara: Anti-Financial Crime Newsletter of UAE’, Emirates News Agency (15 February 2022).
14 United Arab Emirates government portal, ‘Full foreign ownership of commercial companies’.
15 ‘Global Fraud Trends: Device Insights Highlight Increased Threats Since Onset of Pandemic’, TransUnion (22 March 2021).
16 ‘Principles for Effective Insolvency and Creditor/Debtor Regimes, 2021 Edition’, World Bank.
17 Matthew Martin, ‘Saudi Conglomerate’s $7.5 Billion Default Is Finally Settled’, Bloomberg (15 September 2021).
18 Steve Morgan, ‘Cybercrime To Cost The World $10.5 Trillion Annually By 2025’, Cybercrime Magazine (13 November 2020).
19 ‘Joint statement on the UAE’s adoption of Federal Decree Law No. 34 of 2021 on Combatting Rumours and Cybercrime’, ADHRB (24 January 2022).
20 Kingdom of Saudi Arabia Bureau of Experts at the Council of Ministers, Anti-Cybercrime Law, Royal Decree No. M/17 of 26 March 2007.
21 Nabeela, ‘Cyber crimes in Qatar: The law and how to report them’, ilovequatar.net (29 April 2020).
22 Alice Gravenor, ‘Oman: Latest developments in data protection and cybersecurity’, DataGuidance (September 2020).
23 Council of Europe, ‘Kuwait, Cybercrime Legislation’ (15 April 2020).
24 Felicity Glover, ‘DFSA publishes regulatory framework to oversee cryptocurrencies’, The National (10 March 2022).
25 Taylor Locke, ‘Bitcoin launched 13 years ago this month — here are 8 milestones from the past year’, CNBC Make It (3 January 2022).
26 A proof of stake consensus algorithm is a set of rules governing a blockchain network and the creation of its native coin.
27 ‘Crypto Crime Trends for 2022: Illicit Transaction Activity Reaches All-Time High in Value, All-Time Low in Share of All Cryptocurrency Activity’, Chainalysis (6 January 2022).
28 US Attorney’s Office press release, ‘Two Defendants Charged In Non-Fungible Token (“NFT”) Fraud And Money Laundering Scheme’ (24 March 2022).
29 ‘Crypto rug pulls: What is a rug pull in crypto and 6 ways to spot it’, Crypto News (6 February 2022).
30 Niccolo Conte, ‘This is how the top cryptocurrencies performed in 2021’, World Economic Forum (26 January 2022).
31 Sam Fry, ‘Former money laundering prosecutors predict aggressive US crypto seizures’, Global Investigations Review (3 March 2022).
32 ‘Be ready to lose all your money in crypto, EU regulators warn’, Reuters (18 March 2022).
33 Megan DeMatteo, ‘There Are Thousands of Different Altcoins. Here’s Why Crypto Investors Should Pass on Most of Them’, NextAdvisor (18 April 2022).
34 Deborah R Meshulam, Katrina A Hausfeld, Michael T Boardman, Jonathan M Kinney and Evan North, ‘US Department of Justice, aided by cryptocurrency exchanges, seizes over US$3.6 billion in stolen Bitcoin’, DLA Piper (15 February 2022).
35 Lisa Barrington, ‘Financial crime watchdog adds UAE to “grey” money laundering watch list’, Reuters (4 March 2022).
36 Lina Ibrahim and Tariq Alfaham, ‘UAE affirms commitment to strengthening AML/CFT efforts following FATF decision’, Emirates News Agency (4 March 2022).
37 Ian Oxborrow, ‘Dubai school says it is first in Middle East to accept cryptocurrencies for fee payments’, The National (22 March 2022).
38 GOV.UK, ‘Government sets out plan to make UK a global cryptoasset technology hub’ (4 April 2022).
39 The darknet refers to networks that are not indexed by search engines such as Google. These networks are available only to a select group of people, not to the general internet public, and are accessible only via specific software.
40 US Federal Bureau of Investigation public service announcement, ‘The FBI Warns of Fraudulent Schemes Leveraging Cryptocurrency ATMs and QR Codes to Facilitate Payment’ (4 November 2021).
41 John Divine, ‘5 Top Crypto Scams to Watch in 2022’, US News (22 March 2022).
42 See footnote 31.
The world economy is experiencing shock after shock arising from various sources: the COVID-19 pandemic, supply chain shortages, the war in Ukraine, severe inflationary conditions and stalling growth. However, in this edition of the Economic Brief, we will delve into the real estate sector to consider just how the economy is faring; after all, as the French saying goes, ‘when real estate is doing well, everything is’.1
WHAT IS CLIMATE RISK AND WHY DOES IT MATTER FOR FINANCIAL INSTITUTIONS?
Climate risk has been a hot topic – if not the hottest – all over the globe in the past few years. It refers to the possible negative impacts that climate change and the transition to a low-carbon economy may have on the economy and society at large. On one hand, physical risks, such as more extreme temperatures, more frequent and intense floods, wildfires, droughts and storms, rising sea levels and loss of biodiversity, threaten lives, health, infrastructure and financial and economic assets. On the other hand, the transition to a low-carbon economy may significantly change asset values or operating costs in some industries. The speed with which the shift takes place is also a concern, since an abrupt transition could be costly for some companies.
Following the rise in global awareness of climate issues, various international conferences have taken place and several international commitments have been made to address climate change. The financial services industry is certainly at the forefront of this transition due to its systemic importance. Globally, central banks and regulators are demonstrating their awareness and commitment to tackling climate change by issuing various guidelines, protocols and frameworks, and financial institutions are taking actions to adhere to the new rules.
Managing climate risk is a relatively new field and could prove to be complex and challenging. In this article, we will discuss how financial institutions can tackle climate risk effectively.
TYPES OF CLIMATE RISK
Climate risk is a very broad concept, and the associated risks were not well defined until the Bank of England established three categories of climate change risk in 2015. Though the definition and understanding of climate risks may vary across jurisdictions, the three types of climate risk below are the most commonly used.
Table 1 – Three types of climate risk
CLIMATE REGULATION FOR FINANCIAL INSTITUTIONS WORLDWIDE
Central banks and regulators worldwide are establishing climate-related bank regulations or guidelines. The new rules often include stress tests, mandatory risk disclosures, supervision of the risk management of financial institutions and potentially the introduction of additional capital requirements for banks. The regulations are expected to change over time and banks will be engaged continuously to understand and mitigate the risks posed by climate change to the sector.
In 2019, the Bank of England became the first central bank to issue climate risk supervisory expectations. Since then, regulators in most developed regions have followed suit. Some of the major climate regulation milestones and stress testing are indicated in the timeline below.
Figure 1: Major climate-related regulatory and stress testing milestones
Source: Accuracy analysis
CLIMATE RISK MANAGEMENT PILLARS
Climate risk management refers to the approach of making climate-sensitive decisions. The approach seeks to promote sustainable development by reducing vulnerabilities to climate risk. For financial institutions, the guidance comprises four pillars in general.
Figure 2: Four pillars of climate risk management
Source: Accuracy analysis
AN ILLUSTRATIVE FULL-SCALE IMPLEMENTATION FRAMEWORK
With all this in mind, we recommend that an implementation programme for climate risk management cover the elements illustrated in Figure 3 below. This can be broken down into three phases of work: (1) planning and portfolio review, (2) solution implementation and (3) policies, procedures and risk culture.
Figure 3: Full-scale implementation framework
Source: Accuracy
In the following sections, we will introduce certain critical pillars in the implementation of climate risk management and demonstrate the relevant tools that will make implementation smooth and cost-effective.
MANAGEMENT QUALITY ASSESSMENT
Today, companies often have “green” policies in place to help tackle climate change. Sometimes, part of the management team’s remuneration package is determined by the success of the green initiatives. However, corporate executives often have doubts over whether their companies are doing enough. They are also often unsure of the next steps. Therefore, one of the key elements in this phase is to assess management awareness and quality.
The Transition Pathway Initiative (TPI) established a widely accepted framework called the Management Quality Framework. The framework tracks the progress of companies in tackling climate change through the following five levels:
Level 0 – Unaware of climate change as a business issue
Level 1 – Acknowledge climate change as a business issue
Level 2 – Building basic capacity (management system, processes and reporting)
Level 3 – Integration into operational decision-making
Level 4 – Strategic assessment
Banks should assess how their corporate clients are addressing climate risks from governance and strategy perspectives. As shown in Figure 4, a well-designed questionnaire can be used to capture such information from clients.
Figure 4: Template for TPI management quality framework – questionnaire and indicators
Source: Accuracy
GHG PROTOCOL AND PCAF STANDARD
For Phase 2, we start to perform quantitative and qualitative examination. Similar to IFRS or GAAP in the financial accounting world, there are generally accepted standards for measuring greenhouse gas emissions; the Greenhouse Gas (GHG) Protocol is the leading provider. Developed in partnership between the World Resources Institute (WRI) and the World Business Council for Sustainable Development (WBCSD), the GHG Protocol establishes a standardised framework to measure and monitor GHG emissions, aiming to enhance the reliability, accuracy and comparability of measurement and monitoring across companies, industries and countries.
The protocol accounts for all six greenhouse gases identified in the Kyoto Protocol: CO2, CH4, N2O, HFCs, PFCs and SF6. These emissions are classified into three scopes.
Scope 1. All direct emissions from owned or controlled sources (combusted on-site). Common types of Scope 1 activities include stationary combustion (fuel consumption at a facility), mobile combustion (e.g. vehicles) and refrigerants (e.g. from air conditioning).
Scope 2. Indirect emissions from purchased energy from utilities (combusted off-site). Specifically, Scope 2 activities include both purchased electricity (calculation approach can be either market-based or location-based) and purchased heat and steam.
Scope 3. Indirect emissions occurring in the supply chain. These activities can be grouped into eight upstream activities (purchased goods and services, capital goods, fuel and energy-related activities, transportation and distribution, waste generated in operations, business travel, employee commuting and leased assets) and seven downstream activities (transportation and distribution, processing of sold products, use of sold products, end-of-life treatment of sold products, leased assets, franchises and investment).
In terms of calculation, the general approach is: GHG emissions (CO2e) = activity data × emission factor.
The activity refers to the level of emission activity (e.g. tonnes of fuel consumed) and the emission factor converts activity data into emission data (e.g. kg of CO2e per tonne of fuel burnt). It is worth noting that although there are six types of GHG, all emissions are converted into CO2e for better comparability. To facilitate the calculation, the GHG Protocol has developed corresponding Excel tools that can be customised for implementation.
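The calculation above can be sketched in a few lines of Python. The activity quantities and emission factors below are illustrative placeholders, not official GHG Protocol factors.

```python
# Sketch of the GHG Protocol calculation: emissions = activity data x emission factor.
# All factors below are invented for illustration, not official values.

def co2e_emissions(activities, emission_factors):
    """Return total emissions in kg CO2e.

    activities: {activity name: quantity, in the factor's unit}
    emission_factors: {activity name: kg CO2e per unit of activity}
    """
    return sum(qty * emission_factors[name] for name, qty in activities.items())

# Example: Scope 1-style activities with made-up factors
activities = {"diesel_tonnes": 120.0, "vehicle_km": 50_000.0}
factors = {"diesel_tonnes": 3_200.0, "vehicle_km": 0.18}  # kg CO2e per unit (illustrative)

total = co2e_emissions(activities, factors)
print(f"Total emissions: {total:,.0f} kg CO2e")
```

Because every gas is converted to CO2e, the same routine works regardless of which of the six GHGs an activity emits; only the factor changes.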
Like other corporates, financial institutions generate Scope 1, 2 and 3 emissions in their daily operations. However, they should place further attention on their Scope 3 emissions, notably in relation to investment activities (i.e. GHG emissions financed by their loans and investments).
The Partnership for Carbon Accounting Financials (PCAF), an open collaboration of financial institutions, has established a global GHG accounting standard. The standard aims to reduce inconsistencies in carbon accounting methods, to allocate the emissions of companies to financial institutions fairly based on their share of the financing and to help the financial sector facilitate the transition to decarbonisation. It provides a framework for measuring and disclosing emissions from six major asset classes: listed equity and corporate bonds, business loans and unlisted equity, project finance, commercial real estate, mortgages and motor vehicle loans. The asset classes defined by PCAF are based on financing types and sources (i.e. corporate finance, project finance and consumer finance), use of proceeds (i.e. known or unknown as defined by the GHG Protocol) and activity sector (e.g. all sectors, real estate, motor vehicle).
In terms of calculation, the general approach is: financed emissions = attribution factor × borrower or investee emissions, where the attribution factor is the financial institution’s outstanding amount divided by the value of the financed company or asset.
Banks should develop templates based on the PCAF standard in order to calculate their Scope 3 emissions.
Figure 5: Accuracy template for Scope 3 emissions (investment activities)
Source: Accuracy
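As a minimal sketch of the PCAF attribution logic for listed equity and corporate bonds, where the denominator is the company’s enterprise value including cash (EVIC): the portfolio figures below are invented for the example.

```python
# Sketch of PCAF attribution for listed equity and corporate bonds:
# financed emissions = (outstanding amount / EVIC) x company emissions.
# All figures are illustrative, not drawn from any real portfolio.

def financed_emissions(positions):
    """positions: list of dicts with outstanding_amount, evic and
    company_emissions (tCO2e). Returns total financed emissions in tCO2e."""
    total = 0.0
    for p in positions:
        attribution = p["outstanding_amount"] / p["evic"]  # share of financing
        total += attribution * p["company_emissions"]
    return total

portfolio = [
    {"outstanding_amount": 50e6, "evic": 1e9, "company_emissions": 200_000.0},
    {"outstanding_amount": 20e6, "evic": 400e6, "company_emissions": 80_000.0},
]
print(f"Financed emissions: {financed_emissions(portfolio):,.0f} tCO2e")
```

For other asset classes the PCAF standard substitutes a different denominator (e.g. property value for mortgages), but the attribution structure is the same.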
NET ZERO AND TARGET SETTING
With an objective to achieve net-zero emissions by 2050 and limit global warming to 1.5°C, the Science Based Targets initiative (SBTi), a partnership between CDP, the United Nations Global Compact, World Resources Institute (WRI) and the World Wide Fund for Nature (WWF), has established tools for organisations to set science-based targets for emission reduction. There are three key technical pieces in the target-setting processes – carbon budget, emissions and allocation approach.
Figure 6: Three key pieces in the target setting processes
Source: Accuracy
Two of the common target-setting approaches are the absolute contraction approach and the sectoral decarbonisation approach.
Absolute contraction approach: this approach applies to all sectors excluding power generation and oil & gas. It assumes a linear annual reduction rate based on IPCC carbon budget scenarios (i.e. 4.2% for the 1.5°C goal and 2.5% for the well-below 2°C goal). Using these guidelines, companies scale the annual reduction rate by the number of years between the base year and the target year. For example, if a company sets its base year as 2020 and uses the 1.5°C scenario, its GHG emission target for 2030 should be a 42% reduction from the base year level (4.2% multiplied by 10 years).
Sectoral decarbonisation approach (SDA): this approach is sector-specific, although not all sectors have SDA tools available. It is based on the idea of “intensity convergence”, which assumes that the carbon intensity of an individual company converges with its sector’s carbon intensity by 2050. The target percentage reduction differs based on sectoral IEA carbon budgets.
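The absolute contraction arithmetic can be sketched directly from the rates quoted above; the function and scenario names are our own labels.

```python
# Sketch of the absolute contraction approach: target reduction equals the
# annual linear rate times the number of years from base year to target year.

RATES = {"1.5C": 0.042, "well-below-2C": 0.025}  # annual reduction rates from the text

def target_emissions(base_year_emissions, base_year, target_year, scenario="1.5C"):
    """Return the targeted absolute emissions level in the target year."""
    reduction = RATES[scenario] * (target_year - base_year)
    return base_year_emissions * (1 - reduction)

# Worked example from the text: base year 2020, 1.5C scenario, target year 2030
# => 4.2% x 10 years = 42% reduction, i.e. 58% of base emissions remain.
print(target_emissions(100_000.0, 2020, 2030))
```

The SDA would replace the flat rate with a sector-specific convergence path, which is why it needs dedicated tools per sector.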
As financial institutions play a key role in climate risk management, there are specific target-setting approaches for them. It should be noted that financial institutions’ science-based targets should cover Scope 1, 2, and 3 emissions.
For Scope 3 category 15 (investment activities), based on the temperature scoring method developed by the CDP and WWF, SBTi has established two approaches: SBT portfolio coverage and SBT temperature scoring. These approaches enable financial institutions to align their investment and lending portfolios with the emission targets. The applicable target-setting methods depend on the type of asset class.
Figure 7: Applicable target-setting methods for selected asset classes
Source: Accuracy analysis
When using the portfolio coverage method, financial institutions commit to engaging with their investees to set their own approved science-based targets (SBT). On a portfolio basis, the financial institution should aim to increase SBT portfolio coverage linearly to 100% by 2040. The 2040 timeline is set to allow companies enough time to implement their targets to achieve net zero by 2050.
When using the temperature scoring method, financial institutions determine the temperature score of their investment portfolio based on the available GHG emission reduction targets of their investees. As companies may have multiple climate targets, each target is converted into a temperature score for both Scopes 1+2 and Scope 3 over three time frames: short term (targets shorter than 5 years), medium term (5–15 years) and long term (over 15 years). Specifically, each target is mapped to a regression model based on the target type, the company’s sector (ISIC), the intensity metric and the scope. Portfolio-level scores are then generated using one of various weighting options, such as the weighted average temperature score (WATS), the total emissions weighted temperature score (TETS) and the total assets emissions weighted temperature score (AOTS). Financial institutions then analyse ways to improve the portfolio’s temperature score, for example by analysing hotspots or performing what-if analyses. Ultimately, they should determine relevant and practical actions to achieve the long-term targets.
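The simplest of the weighting options, WATS, weights each investee's temperature score by its share of invested value. A minimal sketch, with invented scores and amounts:

```python
# Sketch of the weighted average temperature score (WATS): each investee's
# temperature score is weighted by its share of the portfolio's invested value.
# Scores and amounts below are illustrative.

def wats(holdings):
    """holdings: list of (invested_amount, temperature_score) tuples."""
    total_invested = sum(amount for amount, _ in holdings)
    return sum(amount / total_invested * score for amount, score in holdings)

portfolio = [(60e6, 1.8), (30e6, 2.6), (10e6, 3.2)]  # amounts in USD, scores in degC
print(f"Portfolio temperature score: {wats(portfolio):.2f} degC")
```

Emissions-weighted variants such as TETS replace the invested-amount weights with each investee's attributed emissions; the aggregation structure is otherwise the same.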
Figure 8: Temperature scoring method – key steps
Source: Accuracy analysis
To facilitate target setting, SBTi has developed Excel tools and an open-source Python library for target setting under various methods. For example, Figure 9 shows a target-setting tool specific to commercial real estate and residential mortgages based on the SDA. Figure 10 presents part of the open-source coding templates of the temperature scoring approach.
Figure 9: SDA tool for commercial real estate and residential mortgages
Source: SBTi, Accuracy analysis
Figure 10: Excerpt from open-source coding templates of temperature scoring approach
Source: SBTi, Accuracy analysis
CLIMATE RISK STRESS TESTING
Another essential quantitative assessment for financial institutions regarding climate risk is stress testing. A stress test is a “what if” analysis in which scenarios that would cause shocks to banks are used as inputs. The general operation of a climate risk stress test framework is illustrated in the figure below.
Figure 11: Climate risk stress test framework
Source: Accuracy
Drawing on the tools and data provided by international organisations working on climate risk, such as the Network for Greening the Financial System (NGFS), we have developed several dedicated stress test models that are fit for regulatory purposes. Overall, stress testing can be either a top–down or a bottom–up analysis. We discuss the bottom–up analysis further in the next section.
The key elements of a top–down approach are set out below:
1. Identify scenarios for stress testing based on the NGFS scenarios
2. Forecast key climate and macroeconomic indicators under each scenario
3. Develop the relationship between projected indicators and company profitability and leverage and estimate the impact of chronic physical risk
4. Translate the relationship to changes in the probability of default (PD).
Identify scenarios for stress testing based on the NGFS scenarios: Each NGFS scenario looks at a distinct set of assumptions about how climate policy, emissions and temperatures will change over time.
For an orderly transition, the two common scenarios are (1) net zero 2050 (limits global warming to 1.5°C and reaches global net zero CO2 emissions by 2050 through stringent climate policies and innovation) and (2) below 2°C (limits global warming to below 2°C through gradual increases in the stringency of climate policies).
For a disorderly transition, the two common scenarios are (1) divergent net zero (reaches net zero around 2050 but with higher costs due to divergent policies introduced across sectors) and (2) delayed transition (assumes that annual emissions do not decrease until 2030, followed by strong climate policies to limit warming to below 2°C).
For a hothouse world, the two common scenarios are (1) nationally determined contributions (NDCs, including all pledged policies even if not yet implemented) and (2) current policies (assumes that only currently implemented policies are preserved, leading to high physical risks).
Figure 12: Major NGFS scenarios
Source: NGFS, Accuracy analysis
Forecast key climate and macroeconomic indicators under each scenario: Integrated Assessment Models (IAMs) assist in the generation of key climate and macroeconomic variables based on several NGFS scenarios. The outputs are transition trajectories through time and across different regions/countries, based on various scenarios. Projected carbon prices, GHG emission levels, secondary energy prices, temperature rises and other factors are all taken into account. Aside from transition risk, the outputs of IAMs are also used as inputs into other macro-econometric models (such as the PIK model) to predict the degree of physical chronic risk.
Develop the relationship between projected indicators and company profitability and leverage and estimate the impact of chronic physical risk: With the projected climate and macroeconomic indicators, we can explore how these macro indicators affect company profitability and leverage ratios.
Let us take the imposition of carbon pricing policies as an example. In most cases, this policy will result in greater operational expenses on a company’s income statement. The linkage is due to (1) extra costs of carbon prices paid on direct emissions and (2) higher indirect expenses due to higher energy (i.e. utility) invoices. As a result, the profit margin is squeezed and the overall profitability of the company suffers. This will increase the likelihood of default.
Another linkage relates to capital expenditure. Companies may need to invest in new machinery and production technologies in order to meet the carbon emission targets. They may therefore need to issue additional debt to fund this climate-related capital expenditure. As a result, the company’s debt-to-equity ratio will rise, raising the risk of default.
The likelihood of a corporate default is also influenced by chronic physical risks. Temperature rises, for example, would reduce economic productivity. As a result, expected yearly GDP growth rates could become slower or possibly reverse. This deterioration in the macroeconomic environment assumption also increases the risk of default.
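As a toy illustration of the first linkage described above, a carbon price adding direct costs on Scope 1 emissions plus higher utility bills and squeezing the profit margin. Every figure below is invented for the example, not drawn from any scenario output.

```python
# Illustrative sketch: a carbon price raises operating expenses through
# (1) direct costs on Scope 1 emissions and (2) higher energy invoices,
# squeezing the profit margin. All figures are invented.

def stressed_margin(revenue, opex, scope1_emissions, carbon_price, energy_cost_uplift):
    """Return (baseline margin, stressed margin) after applying a carbon price.

    scope1_emissions in tCO2e, carbon_price in USD per tCO2e,
    energy_cost_uplift as an absolute USD increase in utility expenses.
    """
    baseline = (revenue - opex) / revenue
    stressed_opex = opex + scope1_emissions * carbon_price + energy_cost_uplift
    stressed = (revenue - stressed_opex) / revenue
    return baseline, stressed

base, stressed = stressed_margin(
    revenue=500e6, opex=420e6,
    scope1_emissions=300_000.0, carbon_price=100.0,  # USD 100/tCO2e
    energy_cost_uplift=10e6,
)
print(f"Margin: {base:.1%} -> {stressed:.1%}")
```

In a full model the squeezed margin would then feed the profitability variable that drives the probability of default, alongside the leverage and output-gap effects described in the text.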
In the following example, we illustrate the analytics framework adopted by the Hong Kong Monetary Authority (HKMA) in a research paper1 published in March 2022. This framework is consistent with what we have described above.
Figure 13: Illustrative relationship between climate-change policies and probability of default
Source: NGFS, Accuracy analysis
Translate the relationship to changes in the probability of default (PD): Multiple approaches can be employed to estimate the extent of changes in the probability of default. In the HKMA example, the regulator has adopted a set of regression formulae to estimate the impacts on PD.
In the following equations, the subscript or superscript 𝑖 denotes the firm, 𝑡 denotes the time (year) and 𝑠 denotes the scenario.
Profitability – defined as earnings over total assets: Profitability(i,t,s) = Earnings(i,t,s) / Total assets(i,t,s)
Earnings – revenues minus operating expenses: Earnings(i,t,s) = Revenues(i,t,s) − Operating expenses(i,t,s)
Revenue growth rate – captures how differences in the growth rate of total assets from climate policies may affect firms:
Operating expenses growth rate – captures what proportion of the increase in revenues is translated to operating expenses:
Total assets growth rate – captures how differences in GDP growth rates from climate policies affect the growth rates of total assets for firms:
Leverage – defined as total debt of firms over total assets: Leverage(i,t,s) = Total debt(i,t,s) / Total assets(i,t,s)
Output gap – defined as the difference between the actual output of an economy and its potential output divided by its potential output: Output gap(t,s) = (Actual output(t,s) − Potential output(t,s)) / Potential output(t,s)
Probability of default:
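The accounting identities behind these definitions can be sketched as follows. This is only the skeleton of the framework: the scenario-dependent growth rates would come from the regression formulae, whose coefficients are not reproduced here, and all input figures are hypothetical.

```python
# Skeleton of the accounting identities for one firm (i), year (t) and
# scenario (s). The growth rates stand in for the regression outputs.

def project_firm(revenue, opex, total_assets, total_debt,
                 revenue_growth, opex_growth, asset_growth):
    revenue *= (1 + revenue_growth)
    opex *= (1 + opex_growth)
    total_assets *= (1 + asset_growth)
    earnings = revenue - opex                 # earnings = revenues - operating expenses
    profitability = earnings / total_assets   # earnings over total assets
    leverage = total_debt / total_assets      # total debt over total assets
    return profitability, leverage

def output_gap(actual_output, potential_output):
    """(actual - potential) / potential, at the economy level."""
    return (actual_output - potential_output) / potential_output

# Hypothetical firm under one scenario: revenues grow 2%, costs 4%, assets 1%
profitability, leverage = project_firm(1000.0, 850.0, 2000.0, 900.0,
                                       0.02, 0.04, 0.01)
gap = output_gap(98.0, 100.0)
```

Because costs grow faster than revenues in this scenario, profitability deteriorates, and it is this deterioration, together with the leverage ratio and the output gap, that feeds the probability-of-default regression.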
This framework is logically sound and pragmatic. In practice, we recommend that banks consider this as a starting point, but customisation using the bank data will be necessary. For example, the framework is entirely top–down in nature and does not incorporate a bottom–up analysis; we therefore need to perform single counterparty analysis to estimate PDs for major exposures before generalising using regression.
SINGLE COUNTERPARTY ANALYSIS
For single counterparty analysis on PDs, it is typical to adopt the Merton model. The model is structural in nature and is designed to assess the probability of default for single-name counterparties whose balance sheet items are available and can be projected.
The Merton model takes into account balance sheet components (e.g. equity, short-term debt and long-term debt), asset return volatility and the risk-free rate for calculations. This analysis is calculated bottom–up.
The steps for conducting single counterparty analysis using the Merton model are as follows:
1. Identify top counterparties by exposure (e.g. the top 30 counterparties that cover 70% of the portfolio’s total exposure).
2. Analyse the counterparties’ balance sheet items (e.g. sales volume, unit price of products, capital expenditure, impairments) based on different climate change scenarios. Such analyses should take into account both projected macroeconomic and sectoral indicators from the top–down analysis and analysis of idiosyncratic risk (e.g. the adaptability of the counterparty to climate change based on TPI management quality review).
3. Estimate the inputs for the Merton model (i.e. equity amount, debt level, asset volatility and the risk-free rate). In particular, asset volatility and the risk-free rate can be estimated or obtained from external data providers.
4. Perform single counterparty calculation on PD:
where N⁻¹ is the standard normal inverse function
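A minimal sketch of step 4, applying the standard one-period Merton formula PD = N(−d2) with only the standard library; the balance sheet figures, asset volatility and risk-free rate below are illustrative assumptions.

```python
import math

def merton_pd(asset_value, debt, asset_vol, risk_free, horizon=1.0):
    """One-period Merton probability of default: PD = N(-d2).

    Inputs follow the text: debt level, asset value (equity plus debt as a
    simple proxy), asset return volatility and the risk-free rate.
    """
    d2 = (math.log(asset_value / debt)
          + (risk_free - 0.5 * asset_vol ** 2) * horizon) / (asset_vol * math.sqrt(horizon))
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + math.erf(-d2 / math.sqrt(2.0)))

pd_base = merton_pd(asset_value=150.0, debt=100.0, asset_vol=0.25, risk_free=0.03)
pd_stress = merton_pd(asset_value=150.0, debt=120.0, asset_vol=0.30, risk_free=0.03)
# More debt and higher asset volatility push the modelled PD up.
```

In this illustration the PD roughly quintuples, from about 5% to about 24%, when the counterparty carries more debt and more volatile assets, which is the bottom-up sensitivity the single counterparty analysis is designed to capture.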
Figure 14: Merton Model
Source: Accuracy analysis
TCFD REPORTING
Another important aspect in climate risk management is governance, especially the need for climate-related disclosure. The Task Force on Climate-related Financial Disclosures (TCFD) published a set of recommendations in 2017 and further enhanced them in 2021 to help businesses disclose risks and opportunities arising from climate change.
TCFD has established 11 recommendations surrounding four thematic areas (i.e. governance, strategy, risk management, and metrics and targets). The key requirements are as follows:
With an ever greater number of large financial institutions starting to publish their TCFD reports over the past few years, disclosures probably represent the least challenging part of climate risk management, as many good references and templates are already available.
Accuracy is pleased to announce that we have advised the Caisse de Dépôt et Placement du Québec and the Fonds de Solidarité FTQ in their exclusive negotiations of the acquisition of 65% of Bonduelle Americas Long Life.
Accuracy has been assigned by SPAC Pegasus Entrepreneurs to perform the financial due diligence in the context of the following operation: FL Entertainment (composed of Banijay Group and Betclic Everest Group) is merging with SPAC Pegasus Entrepreneurs, which has been backed by European investment firm Tikehau Capital and Financière Agache.
The transaction gives an implied pro forma equity value of €4.1 billion and a pro forma enterprise value of €7.2 billion – the largest business combination by a European-listed SPAC.
The Banijay Group is the world’s largest independent content production company. The Betclic Everest Group operates in the online sports betting and gaming segment.
Over the past few years, the banking industry has witnessed a new wave of digital transformation. Virtual banking, for example, has become more popular in many regions. Other digitalisation trends such as open banking, RegTech, AI and data-driven decision-making, to name a few, are in the headlines.
In addition, the Covid-19 pandemic is changing the way that banks and customers interact. Today, retail banking products are largely commoditised, with interest rates and other features of bank offers proving broadly similar across providers. FinTechs and TechFins have therefore emerged, bringing new and better customer experiences than their incumbent counterparts can offer. Customer expectations have also been changing: customers increasingly seek digital, seamless, fast and integrated services. Thus, on the other side of the table, banks are left with little choice but to undertake the digital transformations necessary to meet these expectations.
Many retail banking transformations are taking place in the market. In the broad sense, we can categorise them into three types: (1) moving from product-centric to customer-centric (i.e. to have more and faster customer interactions, to offer more personalised services and advice, etc.), (2) automating end-to-end services (i.e. adoption of technology for on-boarding, e-KYC, risk management, internal controls, etc.), and (3) enabling big data analytics and data-driven business decision-making.
In this article, we focus on the second and the third categories. In particular, we dedicate this article to the discussion of credit scorecards, as one of the major tools for data-driven risk management and business decision-making. We will review traditional scorecard development methodologies and then discuss the latest trends.
2. Credit scorecards: the key to unlocking value
Retail banks typically use credit scorecards, which are mathematical models, to predict the behaviours of their customers. The most important behaviour to predict is whether the customers will default on or repay their borrowings. When it comes to such predictions, two types of scorecard are widely used: the application scorecard and the behavioural scorecard.
Table 1 – Comparison between application and behavioural scorecards
3. Traditional scorecard development framework
There are a number of tools that can be used for the development of retail credit scorecards. Historically, SAS was arguably the dominant programming language for retail credit risk management, including the development of credit scorecards. Over the past decade, open source programming languages, such as Python and R, have become more and more popular. While most banks are still using SAS now, many have started using open source languages in parallel.
Traditionally, a six-phase framework is adopted for credit scorecard development. As demonstrated in figure 1 below, the six phases are (1) data processing, (2) variable transformation and selection, (3) logistic regression, (4) performance inference, (5) scorecard scaling and (6) scorecard validation. Refer to appendix 1 for a more detailed discussion regarding the development procedures.
Figure 1 – Six-phase credit scorecard development processes
Source: Accuracy
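Two phases of the framework lend themselves to a compact sketch: the weight-of-evidence (WOE) transformation with information value (IV) used in variable selection (phase 2), and the scaling of log-odds to points (phase 5). The bin counts and the scaling parameters (base score of 600, 20 points to double the odds) are illustrative assumptions, not a prescribed calibration.

```python
import math

def woe_iv(bins):
    """bins: list of (good_count, bad_count) per bin of one characteristic."""
    total_good = sum(g for g, b in bins)
    total_bad = sum(b for g, b in bins)
    woes, iv = [], 0.0
    for good, bad in bins:
        dist_good = good / total_good
        dist_bad = bad / total_bad
        woe = math.log(dist_good / dist_bad)     # weight of evidence per bin
        iv += (dist_good - dist_bad) * woe       # information value accumulates
        woes.append(woe)
    return woes, iv

def scale_score(log_odds, base_score=600.0, base_odds=50.0, pdo=20.0):
    """Map model log-odds to points so that odds of base_odds score
    base_score and every doubling of the odds adds pdo points."""
    factor = pdo / math.log(2.0)
    offset = base_score - factor * math.log(base_odds)
    return offset + factor * log_odds

bins = [(400, 10), (300, 30), (200, 60)]   # low- to high-risk bins (toy data)
woes, iv = woe_iv(bins)
score = scale_score(math.log(50.0))        # odds of 50:1 map to the base score
```

The WOE values fall monotonically from the low-risk to the high-risk bin, and an IV of this magnitude would flag the characteristic as strongly predictive under the usual rule-of-thumb cut-offs.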
4. Challenges in traditional credit scoring
Traditional credit scorecards have been used by market practitioners for a few decades. However, they are not perfect when considered through the lens of big data. Below we highlight the major challenges faced by traditional credit scorecards today.
Figure 2 – Key challenges faced by traditional credit scoring
5. Key enablers to unlock value
We have already briefly mentioned the trends to tackle the challenges encountered by traditional credit scoring. These are certainly at the heart of retail credit decisioning in the era of big data analytics.
Figure 3 – Key trends in retail credit scoring
Source: Accuracy
Trend one: Big data analytics and the use of alternative data
Looking at retail banking globally, we are seeing a strong focus on improving data and deeply understanding customer needs to create personalised experiences. Big data analytics and the use of alternative data have become one of the most prevalent trends in the industry’s transformation. With rising computing power and increasing access to advanced analytics tools, market practitioners are starting to realise the hidden value of data as well as to search for new data sources.
Retail banking has long been a data-driven business, where data is generated at every stage of the customer journey. However, historically, most banks did not have an efficient way to realise the potential of the data nor the IT infrastructure necessary to do so. Furthermore, traditional data as used in the past is just the tip of the iceberg; huge amounts of alternative data, in either structured or unstructured forms, are generated every second from various data sources, both internally and externally, in this digital era.
Over the past decade, thanks to advances in big data analytics, retail banks now have increasing capabilities to process traditional and alternative data efficiently; thus, they are able to build up the customer’s 360-degree profile digitally. With that in mind, banks are starting to provide a more tailor-made customer experience via their banking apps and digital platforms. In addition, upselling and cross-selling campaigns can now target specific customer segments based on insights from big data analytics. Developments in AI and machine learning also help banks and data providers to gain insights from unstructured data (e.g. using natural language processing (NLP) to gauge a customer’s sentiments).
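As a deliberately tiny illustration of that last point, the sketch below scores text sentiment with a hand-made word lexicon; the word lists and the scoring rule are assumptions for demonstration, and production systems would rely on trained NLP models rather than anything this simple.

```python
# Toy lexicon-based sentiment scorer: +1 for each positive word,
# -1 for each negative word, normalised to the range [-1, 1].

POSITIVE = {"great", "helpful", "fast", "easy", "satisfied"}
NEGATIVE = {"slow", "unhappy", "error", "difficult", "complaint"}

def sentiment_score(text):
    words = text.lower().split()
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

s = sentiment_score("The app is fast and easy, I am satisfied!")
```

Even this crude signal hints at how unstructured customer feedback could be turned into a numeric feature alongside traditional data.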
Figure 4 – Traditional data and alternative data comparison
Source: Accuracy
The use of alternative data not only improves the robustness of the scorecard model but also enables banks to assess the creditworthiness of untapped customer segments. This helps to extend financial services to the two billion unbanked adults globally.
With alternative data, retail banks are also able to develop more scorecards, such as income scores, propensity scores and marketing scores. These further help banks decide to whom to lend their money, how much to lend, in what time frame and through what channels.
Some FinTech firms and digital financial service providers have taken the initiative to make use of alternative data sources for credit scoring. Credit bureaus, such as Experian, can now add rent payment history to their credit scoring algorithms thanks to a tool developed by the UK PropTech firm CreditLadder. Lenddo, a software business in Singapore, has incorporated social media and mobile phone data to assess clients’ credit levels. By aggregating data from SMS footprints, electronic devices, emails and credit bureau reports, among others, Algo360, an alternative credit score solution provider, helps new-to-credit customers get loans. Small FinTech companies have used smartphone activity, including calls, GPS data and contact information, to execute credit scoring in microfinance. As alternative data accumulates, the output from predictive models is likely to become more reliable and accurate over time.
Alternative data is not only beneficial when credit scoring individuals, but also in the case of SMEs. Banks commonly consider SMEs to be high-risk clients since information about them is limited, causing difficulty in evaluating their creditworthiness. Because of the intrinsic qualities mentioned previously, alternative data, in conjunction with traditional data sources, will help to build a more comprehensive profile of SMEs, allowing lenders to make better decisions. Digital SME lenders (e.g. Kabbage, an Atlanta-based FinTech company) are making wide use of alternative data such as bank account money flows and balances, business accounting, social media, real-time sales, payments, trading, logistics, and credit reporting service provider data, as well as various other private and public sources of data, to improve risk assessment and to tap into a large market of underserved SMEs.
Moreover, the value of data can be further ‘mined’ if combined with AI and machine learning techniques, which brings us to the second major trend in the digital transformation of retail banking: AI and machine learning in modelling.
Trend two: AI and machine learning in modelling
Retail banking is one of the industries where the use of artificial intelligence (AI) and machine learning (ML) has become widespread. We have talked about the tremendous potential of data, and we believe that these new techniques are well placed to assist in unleashing this potential, particularly when it comes to credit decisioning.
Machine learning can be applied to strengthen traditional logistic regression credit scoring or a solely ML-based model can be developed for credit scoring. Below we highlight some ML techniques that can be applied by banks when developing their credit scoring systems.
Figure 5 – Common machine learning analytics applied for credit scoring system development
Source: Accuracy
An ML-based model would have several advantages over a logistic regression model. First, it can capture the non-linear nature of risk factors, and thus if trained appropriately, can possess higher predictive power. Second, it is agile and dynamic enough to perform the timely assessment of customer credit quality based on greater amounts of relevant and recent data. Third, the model can be highly automated and self-improving, thereby lowering ongoing operational costs.
Higher predictive power
ML-based models are trained with much more data than their traditional counterparts. These include both traditional data and alternative data as discussed above. While traditional models are not designed to discover complicated relationships between large amounts of data, ML-based models are much stronger in this area. As such, it would not be surprising to see that ML-based models are more predictive than traditional models.
More agile and dynamic
ML-based models are continuously trained with the most up-to-date data, so that they are able to perform real-time assessments of customer creditworthiness. This allows the models to provide rapid feedback to model users for credit approval and other decision-making processes. Due to their agility, ML-based models are also more customer-centric and offer smoother assessments of customer creditworthiness. As a result, greater financial inclusion is possible.
Figure 6 – Risk assessment over time – ML model vs traditional model
Source: Accuracy
Highly automated
ML-based models are designed to be self-improving over time and thus highly automated. Traditional models require users to recalibrate them (e.g. on a yearly basis) and redevelop them (e.g. every few years). ML-based models are able to update themselves based on updated data feeds. As such, operational costs for ML-based models are lower, especially in the long term.
With these benefits, it is no wonder that credit bureaus are aggressively using ML to evaluate large amounts of data and generate improved insights. Equifax, for example, provides its clients with tailor-made services by applying neural networks to an artificial intelligence credit scoring approach. Equifax is not alone in experimenting with ML; Experian boosts its analytics products with ML capabilities to provide richer, more insightful information. Even for ‘credit invisible’ clients with infrequently updated credit files, VantageScore incorporates ML to analyse risks and provide ratings. At TransUnion and FICO, ML has also proved effective in detecting high-risk behaviours and producing more accurate credit scorecards. A blend of Tree Ensemble Modelling (a machine learning technique employed by FICO) and scorecards significantly improves predictive performance in credit assessment, compared with traditional scorecards.
In addition to traditional credit bureaus, FinTech companies are also actively exploring possibilities in ML to run their businesses. For example, LendingClub, the world’s largest online platform connecting investors and borrowers, has created its credit-scoring algorithm based on ten years of LendingClub data, AI and ML technologies; Kabbage is developing next-generation ML and analytics stacks for credit risk modelling and portfolio analysis; and LendUp, an American online direct lender, employs ML algorithms to identify the top 15% of borrowers who are most likely to pay back their debts.
Limitations
Notwithstanding the advantages of ML-based models, they have some limitations still to be resolved. First, ML-based models are less transparent than traditional models, and the modelled results can be challenging to interpret. It can also be more difficult to explain the models to regulators and auditors. Second, the performance of ML-based models is highly dependent on the quality of the data used. When feeding huge amounts of data into the models, ensuring that quality can be challenging.
Trend three: Process automation
The third trend is the increasing automation in almost every part of the business. In order to provide fast interactions and personalised customer experiences, automation in know your customer, credit approval, risk management and reporting has become highly important. For example, OppFi, a leading financial technology platform, effectively automates the credit scoring process by using AI models, real-time data analysis and proprietary scoring algorithms. Zest AI, an AI-empowered credit life cycle management organisation, provides banks with its automated services in data processing and documentation as well as compliance validation, deployment and integration. With the help of process automation, banks and FinTech companies are largely improving customer experience and greatly reducing operating costs by cutting loan application processes to a few minutes. Credit scoring is at the heart of credit approval and risk management, and its automation largely relates to the automation of data processing, modelling and validation.
Data processing
With big data analytics, banks use both internal and external data to a great extent. Data collection and data cleansing are the major tasks to be automated. Data collection involves the collection of data from different sources, whether traditional or alternative, as well as its digitisation and standardisation. Data cleansing involves data validity checking, data backfilling, treatments for outliers and doubtful data, etc.
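A minimal sketch of the cleansing rules just listed, using only the standard library; the median backfill, validity bounds and winsorisation cap are illustrative choices for demonstration, not a prescribed recipe.

```python
import statistics

def cleanse(values, lower=None, upper=None, z_cap=3.0):
    """values: list of numbers with None marking missing entries."""
    observed = [v for v in values if v is not None]
    med = statistics.median(observed)
    filled = [med if v is None else v for v in values]       # backfill with the median
    mean = statistics.fmean(filled)
    sd = statistics.pstdev(filled)
    cleaned = []
    for v in filled:
        if lower is not None and v < lower:                  # validity bounds
            v = lower
        if upper is not None and v > upper:
            v = upper
        if sd > 0 and abs(v - mean) > z_cap * sd:            # cap (winsorise) outliers
            v = mean + z_cap * sd if v > mean else mean - z_cap * sd
        cleaned.append(v)
    return cleaned

# Hypothetical monthly balances: one missing, one invalid, one extreme value
data = [1200, None, 1350, -50, 1280, 99999]
clean = cleanse(data, lower=0, z_cap=2.0)
```

Each rule here is deterministic and auditable, which is precisely what makes this part of the pipeline a natural candidate for full automation.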
Modelling
A large part of model development can be automated with proper governance and approval processes. For ML-based models, this is more straightforward, as the models are designed to improve themselves on an ongoing basis using the latest data. For traditional models, automation can be useful for recalibration and the generation of challenger models.
Validation
The validation of models can be entirely automated, whether for traditional or ML-based models. Model validation consists of calculating predefined performance metrics and comparing them with predefined thresholds. It is relatively straightforward to automate such processes and generate validation reports.
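Since validation reduces to computing predefined metrics and comparing them with thresholds, it can be sketched in a few lines. The pairwise AUC below is fine for small samples (production code would use an O(n log n) rank-based formula), and the Gini threshold of 0.4 is an assumed policy value, not a regulatory standard.

```python
def auc(scores, labels):
    """labels: 1 = default, 0 = non-default; higher score = riskier.
    AUC as the share of (default, non-default) pairs ranked correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def validate(scores, labels, gini_threshold=0.4):
    a = auc(scores, labels)
    gini = 2.0 * a - 1.0                      # Gini = 2 * AUC - 1
    return {"auc": a, "gini": gini, "passed": gini >= gini_threshold}

# Toy hold-out sample: model scores and observed default flags
report = validate([0.9, 0.8, 0.7, 0.4, 0.3, 0.2],
                  [1, 1, 0, 1, 0, 0])
```

The resulting report dictionary is exactly the kind of artefact that can be generated on a schedule and archived, turning periodic validation into an automated control.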
Figure 8 – Process automation with the help of ML and AI
Source: Accuracy
What Accuracy does
For clients who need to navigate the digital transformation in the retail banking industry, especially in credit scoring, Accuracy is well placed to work with you on the following tasks:
• Perform an independent review and validation of your existing credit scorecards
• Develop credit scorecards using programming languages including SAS, Python, R, VBA, etc. The development process is semi-automated for easier repetition and maintenance
• Advise you on the adoption of alternative data for credit scorecard development, whether for traditional or machine learning models
• Develop machine-learning-based credit scorecards using open-source languages such as Python
• Perform automation on data processing, modelling and validation
• Perform the overall strategic shaping of retail banking digital transformation and adoption of big data analytics
At Accuracy, our financial services industry experts work with banks and non-bank financial institutions on mergers and acquisitions, strategic transformations, quantitative modelling and adoption of technology solutions. We have been working closely with global financial institutions as well as small and medium-sized institutions over the past two decades.
Appendix 1 – detailed procedures for retail credit scorecard development
This edition of the Economic Brief sees us delve into economic matters at a global level. We will review the latest World Economic Outlook projections prepared by the International Monetary Fund to review the macro impact of the war in Ukraine, and we will move on to consider price dynamics in the three major economic zones: the US, the eurozone and China.
For our fourth edition of Accuracy Talks Straight, René Pigot discusses the nuclear industry, before Romain Proglio introduces us to H2X-Ecosystems, a start-up that enables both the production and the consumption of hydrogen on-site. We then analyse the development of retail banking with David Chollet, Nicolas Darbo and Amaury Pouradier Duteil. Sophie Chassat, philosopher and partner at Wemean, explores the value of work. And finally, we look closer at the long-term discount rate with Philippe Raimbourg, Director of the Ecole de Management de la Sorbonne and affiliate professor at ESCP Business School, as well as the improvement of the economic panorama with Hervé Goulletquer, our senior economic adviser.
After being weakened by various events and decisions that shed an unfavourable light on it (Fukushima, Flamanville, Fessenheim), the nuclear industry is now enjoying something of a resurgence.
The French president’s recent announcement of a programme to build six EPR2 reactors shows his choice to maintain a base of decarbonised electricity production using nuclear energy.
Though it is a subject of much debate, this decision is born of cold pragmatism: despite their demonstrated large-scale deployment, renewable energies remain subject to the whims of the weather. On their own, they will not be able to substitute for dispatchable power generation facilities, given the ambition behind commitments to reduce greenhouse gas emissions by 2050.
Faced with the electrification of the economy, the decision to maintain nuclear power in the French energy mix alongside renewable energies is not so much an option as a necessity. The guarantor of balance in the French network, RTE, also recognises this: prospective scenarios with no renewal of the nuclear base depend, in terms of supply security, on significant technological and societal advances – a high-stakes gamble to say the least. Beyond these aspects, nuclear power also constitutes an obvious vector of energy independence for Europeans. Current affairs cruelly remind us of this, and the situation could almost have led to a change in the German position, if we look at the latest declarations of their government.
In France, initial estimations put construction costs at €52bn, but financing mechanisms are yet to be defined. The only certainty is that state backing will be essential to guarantee a competitive final price of electricity, given the scale of the investments and the risks weighing on the project. Ultimately, the financial engineering for the project will need to be imaginative in order to align the interests of the state, EDF and the consumers.
Founded in 2018 in Saint Malo, H2X-Ecosystems provides companies and regional authorities with the opportunity to create complete virtuous ecosystems marrying energy production and decarbonised mobility. These ecosystems enable both the production and the consumption of hydrogen on-site. They are co-built with and for local actors to make the most of their regional resources in order to create added value, whilst maintaining it at the local level. In this way, the ecosystems participate in the development of these rural, periurban or urban areas.
Renewable and low-carbon hydrogen is produced from water electrolysis using renewable energy, which has recently become one of the major levers for decarbonisation. H2X-Ecosystems links this production with typical consumers (buses, refuse collectors, etc.) but also and especially with light mobility and delivery services: self-service cars and last mile delivery. Indeed, the company has made available a hybrid car operating on both solar power and hydrogen, thanks to which on-grid recharge stations are no longer necessary. All this comes without noise pollution or greenhouse gases (CO2, NOx, etc.).
More generally, H2X-Ecosystems is present throughout the hydrogen value chain, from production to storage to consumption: electrolyser, high power electro-hydrogen unit, power pack (fuel cells and removable tanks) able to be incorporated in light mobility solutions.
H2X-Ecosystems has signed a partnership agreement with Enedis Bretagne for the deployment of its high-power electro-hydrogen unit designed to provide a temporary power source to the grid during construction work or in the event of an incident. This unit makes it possible to reduce Enedis’s CO2 emissions and noise pollution by replacing its fossil fuel units with this technology.
In addition, during a period of high pressure on energy prices, the value offer put forward by H2X-Ecosystems enables a move towards control of energy expenditure and energy autonomy for industrial sites by relying in particular on this electro-hydrogen generator combined with other complementary systems (renewable energies, on-site hydrogen production, etc.).
In his presentation of the France 2030 plan, French President Emmanuel Macron confirmed the importance of this sector in the future: ‘We are going to invest almost 2 billion euros to develop green hydrogen. This is a battle that we will lead for ecology, for jobs, and for the sovereignty of our country.’
Relying in particular on nuclear power to perform highly decarbonised electrolysis, France has a leading role to play. H2X-Ecosystems is participating to the full by establishing its first production tools in France, whilst reconciling its development with a virtuous ecological approach that will generate added value, energy independence and profitability for companies and regions.
Retail banking, the old guard versus the new
David Chollet Partner, Accuracy
Nicolas Darbo Partner, Accuracy
Amaury Pouradier Duteil Partner, Accuracy
Retail banking is a sector set to see its rate of transformation accelerate in the next few years. The past 10 years have seen distribution methods in particular evolve towards greater digitalisation, without, however, calling the physical model into question. In the 10 years to come, in a world where technology will gradually make it possible to serve major needs via platforms, supply, distribution and technological solutions must all evolve.
1. THE TRANSFORMATIONS AT WORK
It is not worth spending too much time explaining the context in which retail banking has been developing for several years now; suffice it to say that there are three principal challenges: ultra-low rates, regulation that has toughened considerably since 2008 and the arrival of new players.
Beyond this context, the sector is experiencing major technological changes. The first such change regards data. Open banking designates an underlying trend that is pushing banking IT systems to open up and share client data (identity, transaction history, etc.). A new open banking ecosystem is gradually taking shape, in which multiple actors (banks, payment bodies, technology publishers, etc.) share data and incorporate each other’s services in their own interfaces, making it possible to provide new services and to create new tools.
Another major development is banking as a service (BaaS). Historically, retail banking was a fixed-cost industry. The opening up of data, the swing to the cloud and the API-sation of banking systems have made closed and vertically integrated production models redundant. Each of the production building blocks of financial services can now be proposed ‘as a service’. This transformation leads to a swing from a fixed-cost economic model to a variable-cost basis. By outsourcing their banking system, digital challengers can launch their businesses with lower costs and shorter time frames.
Finally, the sector cannot entirely avoid the phenomenon of super-apps, which are gradually changing uses by aggregating services for highly diverging needs. This change may slowly make the way clients are served obsolete and probably requires the development of what we might call ‘embedded finance’.
2. THE FUTURE OF TRADITIONAL PLAYERS
Traditional banks have generally resisted the prevailing winds mentioned above. Over the past 10 years, their revenues have not collapsed, though their growth has proved to be somewhat moderate.
Traditional players still have a certain number of strengths. First, historical banks have complete product ranges, which of course cover daily banking (account, card, packages, etc.), but also the balance sheet side of things, with credit and savings products. Classifying the IT systems of major banks among their strengths may seem rather unconventional. Nevertheless, these large systems, though not agile, are often highly robust, and they have made it possible to shrink the technological gap with neobanks. Finally, traditional players are financially powerful and capable of investing to accelerate a technological plan when necessary.
Naturally, these players have some weaknesses, the main one being the customer experience. However, this point does not relate to the gap with neobanks, which has most often been filled; it relates to the gap with purely technological players for example. When considering the trend of convergence of needs, this weakness may represent something of a handicap for the financial sector as a whole. Another weakness relates to these players’ low margin for manoeuvre in terms of the reduction of headcount or number of agencies, if the implementation of a massive cost-reduction programme proved necessary.
These players are deploying or will have to deploy different types of strategy. First, there are the financial actions, be they concentrating or restructuring. Concentration aims to dispose of all activities away from the bank’s main markets in order to be as large as possible in domestic markets. Restructuring, in Spain in particular but also in France with the business combination between SG and CDN, aims to reduce the break-even point.
Banks should also take other actions. In terms of IT, there will come a time, in the not too distant future, when the lack of agility of historical systems will no longer be compensated by their robustness. Developments will accelerate, and the speed of development will become key.
Finally, traditional players will have to rethink their distribution models in the light of digital technology and the convergence of the service of major types of need, which will enable embedded finance. The idea of embedded finance is to incorporate the subscription of financial products directly into the customer’s consumption or purchase path. The financial service therefore becomes available contextually and digitally.
3. THE FUTURE OF NEOBANKS
Neobanks have developed in successive waves for more than 20 years, and the last wave saw the creation of players developing rapidly and acquiring millions of clients. They are capable of raising colossal funds on the promise of a huge movement of clients towards their model.
The primary strength of neobanks is their technology. Having started from scratch in terms of IT, they have been able to rely on BaaS to develop exactly what they need, all with a good level of customer service.
Moreover, these players generally target precise segments; as a result, they have a perfectly adapted offer and customer path, something that is more difficult for generalist banks.
Their weaknesses are often the corollary of their strengths.
Yes, their limited offer makes it possible to better fulfil certain specific needs, but in a world where technology is enabling the emergence of multi-service platforms, addressing only some of a customer’s financial services needs is not necessarily a good idea. It places neobanks on the periphery of a business line that itself is not best placed in the trend of convergence of needs. But if neobank offers are limited, it is not necessarily by choice.
Developing credit and savings products, the areas most often lacking in neobanks, would require them to change scale, particularly in terms of controls and capital consumption. Finally, the consequence of this limited offer is their inability to capture the most profitable retail banking customers en masse: customers with multiple accounts. This explains their low revenues, which plateau at €20 per client.
This does not necessarily condemn the future of the neobank. For a start, it is necessary to distinguish between countries based on the availability of banking services. In countries with a low level of banking accessibility, neobanks have an open road before them, like Nubank in Brazil (40 million customers). In countries with a high level of banking accessibility, it is a different story. The low level of revenues and the trend of convergence of major needs will force neobanks to make choices: they can urgently extend their offer to balance sheet products, like Revolut appears to be doing; they can decide to skip the balance sheet step and widen their offer directly to other areas, like Tinkoff is doing in Russia; or they can let themselves be acquired by a traditional player that has an interest in them from a technological perspective – but they should not wait too long to do so.
The retail-banking sector is more than ever under the influence of major transformations. These may be internally generated, like those that touch on data and BaaS, or externally generated, like the development of platforms serving major needs, initially driven by consumer desire for simplification. In this context, traditional players must address two major topics: embedded finance, on the one hand, and potentially the swing towards decidedly more agile systems to stay competitive, on the other. As for neobanks, their offer must be extended to cover balance sheet products urgently, at the risk of losing some agility, or to cover other needs.
But the finance sector as a whole should probably seek to simplify the consumption of its services considerably, faced as it is with non-financial players that have already undertaken this transformation.
Does the value of work still mean something?
Sophie Chassat Philosopher, Partner at Wemean
‘When “the practice of one’s profession” cannot be directly linked with the supreme spiritual values of civilisation – and when, conversely, it cannot be experienced subjectively as a simple economic constraint – the individual generally abandons giving it any meaning’, wrote Max Weber in 1905 at the end of The Protestant Ethic and the Spirit of Capitalism.1 But is this not what we can observe a century later? A world where the value of work no longer seems self-evident, as if it were ‘endangered’2…
Big Quit in the USA, the hashtags #quitmyjob, #nodreamjob or #no_labor, communities with millions of followers like the group Antiwork on the social network Reddit: the signals of a form of revolt, or even disgust with work, are multiplying. This is not just a change to work (as might be suggested by remote working or the end of salaried employment as the only employment model), but a much more profound questioning movement – like a refusal to work. This is a far cry from Chaplin’s claim that the model of work is the model of life itself: ‘To work is to live – and I love living!’3
In Max Weber’s view, work established itself as a structuring value of society when the Reformation was definitively established in Europe and triumphantly exported to the United States. But the sociologist insisted on one thing: the success of this passion for work can only be explained by the spiritual interest that was linked to it. It is because a life dedicated to labour was the most certain sign of being one of God’s chosen that men gave themselves to it with such zeal. When the ethical value of work was no longer religious, it became social, serving as the index of integration in the community and the recognition of individual accomplishment.
And today? What is the spiritual value of work tied to when the paradigm of (over)production and limitless growth is wobbling, and when ‘helicopter money’ has been raining down for long months? Younger generations, who are challenging the evidence of this work value most vehemently, must lead us to elucidate the meaning of work for the 21st century; studies showing that young people are no longer willing to work at any price are multiplying.
The philosopher Simone Weil, who had worked in a factory, believed in a ‘civilisation of work’, in which work would become ‘the highest value, through its relationship with man who does it [and not] through its relationship with what is produced.’ 4 Make of man the measure of work: that is perhaps where we must start so that tomorrow we can link an ethical aspect to work again – the only one to justify its value. ‘The contemporary form of true greatness lies in a civilization founded on the spirituality of work,’ 5 wrote Weil.
____________
1Max Weber, L’Éthique protestante et l’esprit du capitalisme [The Protestant Ethic and the Spirit of Capitalism], Flammarion “Champs Classiques”, 2017. Quote translated from the French: « Dès lors que « l’exercice du métier » ne peut pas être directement mis en relation avec les valeurs spirituelles suprêmes de la civilisation – et que, à l’inverse, il ne peut pas être éprouvé subjectivement comme une simple contrainte économique –, l’individu renonce généralement à lui donner un sens. »
2Dominique Méda, Le Travail ; Une Valeur en voie de disparition ? [Work; an endangered value?], Flammarion “Champs-Essais”, 2010.
3David Robinson, Chaplin: His Life and Art, Penguin, 2013.
4Translated from the French: « la valeur la plus haute, par son rapport avec l’homme qui l’exécute [et non] par son rapport avec ce qu’il produit. »
5Simone Weil, L’Enracinement [The need for Roots], Gallimard, 1949.
The long-term discount rate
Philippe Raimbourg Director of the Ecole de Management de la Sorbonne (Université Panthéon-Sorbonne) Affiliate professor at ESCP Business School
If since Irving Fisher we know that the value of an asset equals the discounted value of the cash flows that it can generate, we also know that the discounting process significantly erodes the value of long-term cash flows and reduces the attractiveness of long-term projects.
THIS RESULT IS THE CONSEQUENCE OF A DUAL PHENOMENON:
• the passage of time, which automatically whittles down the present value of all remote cash flows;
• the shape of the yield-to-maturity curve, which generally leads to the use of higher discount rates the further in the future the cash flows are due; indeed, we usually observe that the yield curve increases with the maturity of the cash flow considered.
THE DISCOUNTING PROCESS SIGNIFICANTLY ERODES THE VALUE OF LONG-TERM CASH FLOWS
For this reason, the majority of companies generally invest in short-term and medium-term projects and leave long-term projects to state bodies or bodies close to public authorities.
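The erosion described above can be shown with a short, purely illustrative calculation (the 100-unit cash flow and the 4% discount rate are assumptions, not figures from this article): the further away a cash flow lies, the smaller its present value becomes.

```python
# Illustrative sketch: present value of a fixed 100-unit cash flow
# at increasing horizons, discounted at an assumed 4% per year.
def present_value(cash_flow, rate, years):
    """Discount a single future cash flow back to today."""
    return cash_flow / (1 + rate) ** years

for years in (1, 10, 30, 50):
    print(f"{years:>2} years: PV = {present_value(100, 0.04, years):.1f}")
```

At 4%, a cash flow due in 50 years is worth only around a seventh of its nominal amount today, which is precisely why long-term projects struggle to compete with short-term ones.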
We will try to explain here the potentially inevitable nature of this observation and under what conditions long-term rates can be less penalising than short-term ones. This will require us to explain the concept of the ‘equilibrium interest rate’ as a first step.
THE EQUILIBRIUM INTEREST RATE
We are only discussing the risk-free rate here, before taking into account any risk premium. In a context of maximising the inter-temporal well-being of economic agents, the equilibrium interest rate is the rate that enables an agent to choose between an investment (i.e. a diminution of his or her immediate well-being resulting from the reduction of his or her consumption at moment 0 in favour of savings authorising the investment) and a future consumption, the fruit of the investment made.
WE CAN EASILY SHOW THAT TWO COMPONENTS DETERMINE THE EQUILIBRIUM INTEREST RATE:
• economic agents’ rate of preference for the present;
• a potential wealth effect that is positive when consumption growth is expected.
The rate of preference for the present (or the impatience rate) is an individual parameter whose value can vary considerably from one individual to another. However, from a macroeconomic point of view, this rate is situated in an intergenerational perspective, which leads us to believe that the value of this parameter should be close to zero. Indeed, no argument can justify prioritising one generation over another.
The wealth effect results from economic growth, enabling economic agents to increase their consumption over time. The prospect of increased consumption encourages economic agents to favour the present and to use a discounting factor that is ever higher the further into the future they look.
In parallel to this potential wealth effect, we also understand that the equilibrium interest rate depends on the characteristics and choices of the agents. They may have a strong preference for spreading their consumption over time, or on the contrary, they may not be averse to possible inequality in the inter-temporal distribution of their consumption.
Technically, once the utility function of the consumers is known (or assumed), it is the degree of curvature of this function that will provide us with the consumers’ R coefficient of aversion to the risk of inter-temporal imbalance in their consumption.
If this coefficient equals 1, this means that the consumer will be ready to reduce his or her consumption by one unit at time 0 in view of benefitting from one additional unit of consumption at time 1. A coefficient of 2 would mean that the consumer is ready to reduce his or her consumption by two units at time 0 for that same additional unit at time 1. It is reasonable to think that R lies somewhere between 1 and 2.
From this perspective, in 1928 Ramsey proposed a simple and illuminating formula for the equilibrium interest rate. Using a power function to measure the consumer’s perceived utility, he showed that the wealth effect in the formation of the equilibrium interest rate was equal to the product of the nominal period growth rate of the economy and the consumer’s coefficient of aversion R. This leads to the following relationship:
r = δ + gR
where r is the equilibrium interest rate, δ the impatience rate, g the nominal period growth rate of the economy and R the consumer’s coefficient of aversion to the risk of inter-temporal imbalance in his or her consumption.
Assuming a very low value for δ and a value close to one for R, we see that the nominal growth rate of the economy constitutes a reference value for the equilibrium interest rate. This equilibrium interest rate, as explained, is the risk-free rate that must be used to value risk-free assets; if we consider risky assets, we must of course add a risk premium.
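Ramsey’s relationship is simple enough to compute directly. The sketch below uses illustrative values only (an impatience rate near zero and R close to one, as discussed above; the 3% growth figure is an assumption, not a figure from this article):

```python
# Minimal sketch of Ramsey's rule: r = delta + g * R.
def ramsey_rate(impatience, growth, aversion):
    """Equilibrium risk-free rate from the Ramsey (1928) relationship."""
    return impatience + growth * aversion

# With delta ~ 0 and R ~ 1, r is roughly the nominal growth rate.
r = ramsey_rate(impatience=0.001, growth=0.03, aversion=1.0)
print(f"equilibrium rate: {r:.3%}")
```

This makes the article’s point concrete: under these assumptions the equilibrium rate tracks nominal growth almost one for one.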
In the current context, Ramsey’s relationship makes it possible to appreciate the extent of the effects of unconventional policies put in place by central banks, which have given rise to a risk-free rate close to 0% in the financial markets.
THE LONG-TERM DISCOUNT RATE
Now that we have established the notion of the equilibrium interest rate, we can move on to the question of the structure of discount rates based on their term.
We have just seen that the discount rate is determined by the impatience rate of consumers, their coefficient of aversion R and expectations for the growth rate of the economy. If we consider the impatience rate to be negligible and by assuming that the coefficient of aversion remains unchanged over time, this gives a very important role to the economic outlook: the discount rate based on maturity will mainly reflect the expectations of economic agents in terms of the future growth rate.
Therefore, if we expect economic growth at a constant rate g, the yield-to-maturity curve will be flat. If we expect growth acceleration (growth of the growth rate), the rate structure will grow with the maturity. However, if we expect growth to slow down, the structure of the rates will decrease.
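The three curve shapes just described follow mechanically from applying the Ramsey relationship maturity by maturity. A hedged sketch, with hypothetical growth paths (δ set to zero and R to one, per the assumptions above):

```python
# Discount-rate term structure implied by r_t = delta + g_t * R,
# one rate per maturity, for three hypothetical growth expectations.
def curve(delta, growth_path, R):
    """One discount rate per maturity, from expected growth at that horizon."""
    return [delta + g * R for g in growth_path]

flat       = curve(0.0, [0.03, 0.03, 0.03], R=1.0)  # constant growth
increasing = curve(0.0, [0.02, 0.03, 0.04], R=1.0)  # accelerating growth
decreasing = curve(0.0, [0.04, 0.03, 0.02], R=1.0)  # slowing growth
```

Constant expected growth yields a flat curve, accelerating growth an upward-sloping one, and slowing growth a downward-sloping one, exactly as stated above.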
We thus perceive the informative function of the yield-to-maturity curve, which makes it possible to inform the observer of the expectations of financial market operators with regard to expectations of the growth rate of the economy.
WE ALSO SEE THAT THE PENALISATION OF THE LONG-TERM CASH FLOWS BY THE DISCOUNTING PROCESS IS NOT INEVITABLE.
When the economic outlook is trending downwards, the rate structure should be decreasing. But we must not necessarily deduce that this form of the yield curve is synonymous with disaster. It can very easily correspond to a return to normal after a period of over-excitation. For example, coming back to the present, if the growth rate of the economy is particularly high because of catch-up effects, marking a significant gap compared with the sustainable growth rate in the long term, the rate structure should be decreasing and the short-term discount rate higher than the discount rate applicable for a longer time frame.
It is only the action of the central banks, which is particularly noticeable on short maturities, that is preventing such a statistical observation today.
When improvement does not necessarily rhyme with simplification
Today, though this statement may apply more to developed countries than to developing countries, the economic landscape appears on the surface more promising. COVID-19 is on the verge of transforming from epidemic to endemic. The economic recovery is considered likely to last, and the growth lost during the COVID-19 crisis has mostly been recovered. Last but not least, prices are accelerating.
This last phenomenon is quite spectacular, with the year-on-year change in consumer prices passing in the space of two years (from early 2020 to early 2022) from 1.9% to 7.5% in the United States and from 1.4% to 5.1% in the eurozone. What’s more, this acceleration is proving stronger and longer lasting than we had expected of the price effects of reopening an economy previously hindered by public health measures.
Faced with these dynamics on the dual front of health and the real economy, opinions on the initiatives to be taken by central banks have changed. The capital markets are calling for the rapid normalisation of monetary policies: stopping the increase in the size of balance sheets and then reducing them, as well as returning the reference rates to levels deemed more normal. This, of course, comes with the creation of both upward pressure and distortions in the rate curves, as well as a loss of direction in the equity markets.
At this stage, let’s have a quick look back to see how far we may have to go. During the epidemic crisis, the main Western central banks (the Fed in the US, the ECB in the eurozone, the Bank of Japan and the Bank of England) accepted a remarkable increase in the size of their balance sheets. For these four banks alone, the balance sheet/GDP ratio went from 36% at the beginning of 2020 to 60% at the end of 2021. This is the counterpart to the bonds bought and the liquidity injected in their respective banking systems. At the same time, the reference rates were positioned or maintained as low as possible (based on the economic and financial characteristics of each country or zone): at +0.25% in the US, at -0.50% in the eurozone, at -0.10% in Japan and at +0.10% in the UK. This pair of initiatives served to ensure the most favourable monetary and financial conditions. They ‘supplemented’ the actions taken by the public authorities: often state-backed loans granted to businesses and furlough measures in parallel to significant support to the economy (around 4.5 points of GDP on average for the OECD zone; note, the two types of measure may partly overlap).
Now, let’s try to set out the monetary policy debate. The net rebound of economic growth in 2021, the widely shared feeling that economic activity will continue following an upward trend, and price developments that are struggling to get back into line all contribute to a situation that justifies the beginning of monetary policy normalisation. It goes without saying that the timing and the rhythm of this normalisation depend on conditions specific to each geography.
HOWEVER, WE MUST BE AWARE OF THE SINGULAR NATURE OF THE CURRENT SITUATION.
The current inflationary dynamics are not primarily the reflection of excessively strong demand stumbling over a supply side already at full capacity.
More so, they reflect – and quite considerably – production and distribution apparatuses that cannot operate at an optimal rhythm because of the disorganisation caused by the epidemic and sometimes by the effects brought about by public policies. The return to normal – and if possible quickly – is a necessity, unless we are willing to accept lasting losses of supply capacity. With this in mind, we must be careful not to speed down the road to monetary neutrality; otherwise, we risk a loss of momentum in economic growth and a sharp decline in financial markets, both of which would lead us away from the desired goal.
Another point must be mentioned, even if it is more classic in nature: the acceleration of consumer prices is not without incident on households. It gnaws away at their purchasing power and acts negatively on their confidence, both things that serve to slow down private consumption and therefore economic activity.
THIS IS ANOTHER ELEMENT SUPPORTING THE GRADUAL NORMALISATION OF MONETARY POLICY.
How do the two ‘major’ central banks (the Fed in the US and the ECB in the eurozone) go about charting their course on this path, marked out on the one hand by the impatience of the capital markets and on the other by the need to take account of the singularity of the moment and the dexterity that this singularity requires when conducting monetary policy?
All we can do is observe a certain ‘crab walk’ by the Fed and the ECB. Let’s explain and start with the US central bank.
The key phrase of the communiqué at the end of the recent monetary policy committee of 26 January is without doubt the following: ‘With inflation well above 2 percent and a strong labor market, the Committee expects it will soon be appropriate to raise the target range for the federal funds rate.’ Not surprisingly, the reference rate was raised by 25 basis points on 16 March, and as there is no forward guidance, the rhythm of the monetary normalisation will be data dependent (based on the image of the economy drawn by the most recently published economic indicators). At first, the focus will be on the price profile; then, the importance of the activity profile will grow.
The market, with its perception of growth and inflation, will be quick to anticipate a rapid pace of policy rate increases. The Fed, having approved the start of the movement, is trying to control its tempo. Not the easiest of tasks!
Let’s move on to the ECB. The market retained two things from the meeting of the Council of Governors on 3 February: risks regarding future inflation developments are on the rise and the possibility of a policy rate increase as early as this year cannot be ruled out.
Of course, the analysis put forward at the time was more balanced, and since then, Christine Lagarde and certain other members of the Council, such as François Villeroy de Galhau, have been working to moderate market expectations that are doubtlessly considered excessive.
We can see it clearly: it will all be a question of timing and good pacing in this incipient period of normalisation. In medio stat virtus1, as Aristotle reminds us. But how difficult it can be to establish!
____________
1Virtue lies in a just middle.
IMPACT OF THE RUSSIAN INVASION OF UKRAINE: NECESSARY DOWNWARD REVISION OF ECONOMIC ASSESSMENT
• The world outside Russia, especially Europe, will not get through the crisis unscathed. The continued acceleration of prices and the fall in confidence are the principal reasons for this. Indeed, the price of crude oil has increased by over 30% (+35 dollars per barrel) since the beginning of military operations, and the price of ‘European’ gas has almost doubled. In the same way, it is impossible to extrapolate the rebound in the PMI indices of many countries in February; they are practically ancient history. Growth will slow down and inflation will become more intense, with the United States suffering less than the eurozone.
• Vigilance (caution) may need to be even greater. This new shock (the scale of which remains unknown) is rattling an economic system that is still in recovery: the epidemic is being followed by a difficult rebalancing of supply and demand, creating an unusual upward trend in prices compared with the past few decades. Is the economic system’s resistance weaker as a result?
• In these conditions, monetary normalisation will be more gradual than anticipated. Central banks should monitor the increase in energy (and also food) prices and focus more on price dynamics excluding these two components – what we call the ‘core’. The most likely assumption is that this core will experience a slower tempo, above all because of less well-orientated demand.
In light of the current context, this month’s edition of the Economic Brief will focus on the relationship between war and the economy. In particular, we will look into links between the two; we will delve into economic theory in relation to war; and we will examine some of the impacts of the ongoing crisis in Ukraine on the world economy.
Accuracy conducted financial due diligence for Boralex in the context of its agreement with Energy Infrastructure Partners to support the implementation of its Strategic Plan in France.
In this first edition of the Economic Brief in 2022, we look into some of the significant factors currently affecting the global economy. We start with COVID-19, its development, and a new mentality taking hold. We then move on to the Purchasing Managers Index to see what it tells us of the level of confidence in economic activity across three major zones. Finally, we take a closer look at inflation and the structural developments that are set to affect prices in the future.
Accuracy supported House of HR with the acquisition of the Dutch company TMI, a company specialised in secondment and recruitment in health care. With the TMI acquisition, House of HR aims to increase its presence in the healthcare sector, a market in which the group has long wanted to position itself on a larger scale. The acquisition of TMI is a significant step in realising the objective of setting up a specialised branch of HR services for health care within the group.
For lawyers involved in the world of litigation and arbitration, claims for damages of all kinds are a common (if not daily) occurrence. However, as the quantification of such damages falls to experts in the fields of accounting, valuation and economics, many legal practitioners can find it difficult to sense-check points put forward by experts on both sides of the debate.
This article was drafted with exactly this intent in mind: to provide legal practitioners with a straightforward introduction to the main concepts and methods adopted in the assessment of damages.
1. DAMAGES FRAMEWORKS
In common law jurisdictions the theory and principles underlying damages claims are well established. Although the specific rules on whether certain types of claim are allowed may differ in civil law jurisdictions such as China, the underlying principles of assessment set out in this article should still apply.
One of the key documents for this discussion is Fuller and Perdue’s classic article, The Reliance Interest in Contract Damages (1937)1, which elucidates 3 key principles of contract damages, namely:
• The Expectation principle;
• The Reliance principle; and
• The Restitution principle.
The Expectation principle holds that damages following breach of contract should put the claimant in the economic position in which they would have been, if the respondent had fulfilled its promise.
The Reliance principle holds that damages for the breach should make the claimant as well off as it would have been had the promisor never made its promise at all.
The Restitution principle holds that damages for breach require the respondent to return any benefit conferred on them by the claimant as a result of the promise.
The first of the 3 principles (Expectation) is doctrinally dominant in discourse about damages and is regularly cited in expert witness reports regarding damages, hence it will form the focus of this article.
One important elaboration of this principle was clarified in the Chorzow Factory case (1928), wherein the Permanent Court of International Justice (PCIJ) established the reparation standard for intentionally wrongful acts under customary international law as follows:
“The essential principle contained in the actual notion of an illegal act… is that the reparation must, as far as possible, wipe out all the consequences of the illegal act and re-establish the situation which would, in all probability, have existed if that act had not been committed.” 2
Practically for the expert, this typically means assessing the economic position of the claimant under two situations: the actual situation, in which the “illegal act” occurred, and a counterfactual (the “But For”) situation in which it did not. Reparation, or damages, would then equal the difference between the two, thereby re-establishing the economic position in the counterfactual situation.
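The expert’s task described above reduces to a simple subtraction once both scenarios have been quantified. A purely illustrative sketch (all figures are hypothetical, not drawn from any case):

```python
# Illustrative only: damages as the difference between the "But For"
# (counterfactual) cash flows and the actual cash flows, period by period.
def damages(but_for_flows, actual_flows):
    """Total shortfall of actual results relative to the counterfactual."""
    return sum(bf - a for bf, a in zip(but_for_flows, actual_flows))

# Hypothetical profits the claimant would have earned vs. actually earned.
loss = damages(but_for_flows=[120, 130, 140], actual_flows=[80, 70, 60])
print(loss)
```

The difficulty in practice lies not in this arithmetic but in establishing a credible But For scenario, as the sections below discuss.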
2. TYPES OF DAMAGE SUFFERED
There are several types of damage which can be suffered by the claimant, depending on the exact illegal act involved:
Additional costs
This type of damage is commonly encountered in construction disputes where a contractor has incurred additional costs beyond its tender price due to action (or inaction) by the owner, or conversely, rectification / remedial costs if the subcontractor’s work has been delayed or substandard.
Lost profits
This is one of the most common types of damage claims encountered in commercial disputes. An example would be where a supplier has failed to supply a key, hard-to-source component of a product leading to a loss of sales and therefore profits.
Loss of use of an asset/investment
One potential example would be where the claimant (a manufacturer) loses the ability to produce and sell its products due to damaging acts by the respondent, such as expropriation of or intentional damage to a manufacturing plant.
Loss of opportunity
This type of damage can often be encountered in claims against professional advisors, where for example incorrect advice or inaction by an advisor leads to the claimant losing out on an opportunity to invest in a redevelopment project.
Reputational harm
An example would be in cases where an action or allegation by the respondent has damaged the image or reputation of the claimant’s business, leading to concrete economic loss.
3. COMMON BASES FOR CLAIMING DAMAGES
Two common types of legal dispute where expert assistance is often required to assess the quantum of damages are (i) commercial contract disputes and (ii) investment-treaty disputes under Bilateral Investment Treaties (BIT).
Commercial contract disputes can arise wherever a contracting party has not fulfilled its part of a contract, whether that be in terms of providing certain products or services, or completing agreed work to a certain standard within an agreed duration, etc. Investment treaty disputes, by definition, arise between a state (or State-Owned Entity) and an investor, which can be an individual or a company, often when assets or enterprises operated by said investor are expropriated by the state. The Chorzow factory case referred to in Section 1 could be considered a typical example: it involved a nitrate factory expropriated by the Polish government from its German owners.
Although from a legal standpoint there are many differences between the two types of claim (not least procedural), from an economic valuation perspective the applicable principles are very similar. Hence examples of both types of case will be covered in the sections below.
4. OVERVIEW OF VALUATION APPROACHES
Except in extremely straightforward cases – possibly where a claimant has been deprived of an asset whose value is standardised – most damages claims will involve some kind of valuation process. These claims can be classified as either direct or indirect losses. Direct losses refer to instances where the claimant has suffered the loss of access to or use of an income-generating enterprise or asset, and therefore would include shareholder disputes, divorce cases and certain types of expropriation claims. Indirect losses, as usually defined in contract law, arise from a special circumstance of the case and are only recoverable if the party knew or should have known of the circumstance of the loss when entering into the contract.
Approaches to valuing losses can broadly be divided into three main categories, namely (i) Income, (ii) Market Multiples, and (iii) Cost-based approaches. In very simplistic terms, the income approach values an asset based on the income it will generate; the market multiples approach values an asset by comparing it to other comparable assets or businesses in the market; and the cost-based approach values an asset based on the current replacement or reproduction cost of an asset, whilst taking into account any deductions required for deterioration or obsolescence of the asset.
Income Approach
The income approach is strongly preferred in valuing losses due to its flexibility and wide applicability. In theory, some form of the income approach can be used for valuing any income-generating asset3. This approach converts the expected future economic benefits from the asset – generally, cash flows – into a single, present value. Because this approach bases value on the ability to generate revenue and profits, it would be well-suited to valuing established, profitable businesses, as well as, say, new mining assets where the income streams are well defined. In comparison, it would be more challenging – but not impossible – to reliably apply this method to an early-phase high-tech start-up company as the range of valuations produced would be extremely wide due to uncertainty as to the size and timing of cash flows.
Types of claim in which income approaches could be adopted include:
I. Breach of contract disputes where a management or distribution contract has been terminated. Common situations where these are seen involve long-term hotel management and pharmaceutical distribution contracts;
II. Unfair competition disputes where, for example, a competitor may have diverted business or orders away from the claimant company;
III. Advance Loss of Profit claims where an incident has led to either delayed start-up or interruption of production, such as at a power plant or cement factory.
But For v. Actual
In our experience of disputes, the historical actual situation is generally a matter of factual evidence and agreed upon between the parties (although they may not agree on the ‘forecast’ actual situation). One of the key tasks of the expert is rather to determine what the counterfactual situation would have been (the But For scenario). The loss is effectively the difference between the Actual and But For situations, as shown in the diagram above.
In all the examples listed above, an expert would need to examine the historical books and records of the company, as well as its internal budgets and business plans, and also take into account any relevant industry research as to future trends, in order to form an opinion as to what a reasonable But For scenario would have been. We discuss establishing a reliable But For scenario further below.
The income approach can also take into account common litigation issues such as offsetting mitigation of losses by the claimant, and/or discounts for minority holdings, etc. Once an income approach is decided upon there are two main methods of arriving at a present value, namely Discounted Cash Flows (“DCF”) and capitalised earnings, with DCF being much more common.
One well-known published example of a case where the income approach was adopted is the matter of Suez v. Argentina4, being an investor-state ICSID dispute arising from Argentina’s termination of the concession granted to a consortium of claimants led by Suez S.A. (“Suez”) to provide water distribution and waste water treatment services to the city of Buenos Aires. The Suez consortium had been granted a 30-year concession as part of the privatisation of said services in 1993 and had run the concession relatively smoothly for the first 7 years of the concession. Tensions arose between Suez and the Argentine government during the Argentine financial crisis of 2001 – 2003, leading to the eventual termination of the concession by Argentina in 2006. Suez claimed US$1.09 billion in lost management fees, unpaid dividends and losses on equity investments, all of which were assessed using the income approach.
Although the final award by the ICSID tribunal was substantially less than that originally claimed (US$404.5 million), the tribunal agreed with the use of the income approach by the claimants’ expert.
Market Multiples Approach
The underlying logic of the market multiples approach is that the value of an asset or business should be similar to the value of another comparable asset or business, for example a business of a similar size in the same industry. An easily understood analogy would be that the price of a 10-year-old Ford sedan should be similar to the price of another Ford sedan of the same model and age.
The market multiples approach is often used in cases involving loss of use, loss of opportunity and reputational harm, as it relies on valuing an asset as a whole, rather than the additional costs or loss of profits arising from a specific action.
This approach is typically used (both in damages contexts and in the investment world) in the valuation of the equity of non-listed companies, for which a share price is not directly attainable but prices for similar, listed companies are. For the market approach to be appropriate, it is crucial that there be a sizeable pool of companies which are similar in terms of product and scale to the subject company and for which there are sufficient observable data points regarding their value. However, the market approach can be used for any assets for which there are sufficient comparators with observable prices.
Applied to the valuation of non-listed companies, the expert (i) identifies recent, arm’s length transactions involving comparable public or private businesses, and then (ii) develops pricing multiples which can be applied to the subject company’s normalised earnings or other relevant metrics of value. These pricing multiples can be based either on the market price of comparable listed companies on a stock exchange, or alternatively on real-world transactions involving entire comparable companies or operating units which have been sold.
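The two-step mechanics described above can be sketched as follows. The comparable companies, multiples and figures are all hypothetical; using the median of the comparable set is one common way to dampen outliers, not a prescribed rule.

```python
# Illustrative only: applying a pricing multiple from hypothetical comparable
# listed companies to the subject company's normalised earnings.
from statistics import median

comparables_ev_to_ebitda = {
    "Comparable A": 8.2,   # EV/EBITDA multiples observed for listed peers
    "Comparable B": 7.6,
    "Comparable C": 9.1,
}

# Median dampens the influence of any single outlying comparable
multiple = median(comparables_ev_to_ebitda.values())

subject_normalised_ebitda = 50.0   # $m, after stripping out one-off items
enterprise_value = multiple * subject_normalised_ebitda

net_debt = 120.0                   # $m, deducted to move from enterprise to equity value
equity_value = enterprise_value - net_debt
print(f"Implied equity value: ${equity_value:.0f}m")
```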
The advantages of the market multiples approach are that it is simple to understand, is widely used in the investment industry, is based on objective, observable third-party data and does not rely on internal business plans, which can often be over-optimistic or biased.
The most common limitation to the use of this method is the lack of sufficient comparable companies, particularly in the case of high-tech start-up companies where there may be few, if any, other listed companies offering the same product or service.
An example where this approach was applied is the ICSID arbitration case of Crystallex International Corp. (“Crystallex”) v. the Republic of Venezuela, in which Crystallex – a Canadian mining company – launched a claim arising from Venezuela’s expropriation of a gold mine being developed by Crystallex in that country. In that case, the tribunal rejected the respondent’s cost-based valuation approach, and strongly preferred the approach of the claimant’s expert, which was a market multiples approach using the gold reserves of the mine as a metric of value, leading to a damages award of US$1.2 billion in Crystallex’s favour.
Cost Approach
The cost approach is based on the assumption that most or all of the value of a company is in its assets. In this method, the expert determines the overall enterprise value by calculating the value (whether that be Book value or Fair Market value) of the company’s assets net of its liabilities.
The advantages of this method are that it is simple to understand, is based on the current situation of the company and is arguably less subjective in that it does not involve projecting the future performance of the company. It can be an appropriate method when valuing holding companies, companies in liquidation or asset-intensive businesses where cash-generating operations tend to contribute less of the overall value. An assessment of sunk/additional costs, which are typically based on the claimant’s historical financial records, would also fall under the cost approach umbrella.
However, for the majority of companies, where cash-generating operations do contribute most of the value, it would not be appropriate. Also, it does not directly value intangible assets such as brands or Intellectual Property (IP), and so the expert would have to assess that value separately.
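The assets-net-of-liabilities calculation at the heart of the cost approach can be sketched as below. The line items and figures are hypothetical; in practice each asset would typically be restated at Fair Market value rather than simply taken at Book value.

```python
# Illustrative only: a net asset value calculation under the cost approach,
# using invented balance sheet figures (all in $m).

assets = {
    "property_plant_equipment": 300.0,
    "inventory": 40.0,
    "receivables": 25.0,
    "cash": 15.0,
}
liabilities = {
    "bank_loans": 150.0,
    "payables": 30.0,
}

# Enterprise value under this method: total assets net of total liabilities
net_asset_value = sum(assets.values()) - sum(liabilities.values())
print(f"Net asset value: ${net_asset_value:.0f}m")
```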
An example of an investment dispute where the cost approach was applied is Asian Agricultural Products (“AAP”) v. Sri Lanka. AAP was a Hong Kong-based company which had a minority shareholding in Serendib Sea Foods (“Serendib”), which engaged in prawn farming in the eastern region of Sri Lanka. Serendib’s prawn farm sustained severe damage during a major domestic insurrection between a separatist guerrilla group and government forces. Subsequently, AAP alleged that it suffered a total loss of its investment and claimed compensation of US$9 million. In the final award, the tribunal awarded US$460,000, being compensation purely for the tangible assets of the business, as Serendib was loss-making, and had only made two shipments of its single product (prawns) to its target export market (Japan) at the time of the incident.
5. ESTABLISHING A RELIABLE BUT FOR
One of the most important aspects in arriving at a robust calculation of loss under the income method is establishing a reliable But For scenario.
First, the expert must show that he or she has correctly identified the impact of the breach. Many contemporaneous factors can affect a company’s performance in a given year, and failing to correctly identify these other factors can lead to the expert overestimating the loss. For example, imagine a hypothetical dispute where an auto component manufacturer had failed to supply the required components in time to a well-known car manufacturer. However, the breach coincided with depressed demand for cars in multiple countries due to the impact of the Covid-19 virus. In such a case it would clearly be wrong to forecast loss purely based on pre-Covid-19 historical performance as the overall market conditions at the time of the breach were materially different.
Second, the expert needs to demonstrate that rather than unquestioningly accepting the company’s projections at face value, he or she has critically examined historical performance and any forecasts of the company to arrive at an opinion.
This process involves not just looking at the company’s financial statements, management accounts and internal forecasts, but also external evidence such as broker forecasts and market research reports. These external information sources can be very helpful in highlighting areas where the company has been overly optimistic, and pointing out market-wide trends which may have impacted on the company’s performance during the loss period.
It is at this juncture that benchmarking has a large role to play: a company’s forecast sales prices for a commodity can be benchmarked against broker projections, or the entire valuation using the DCF approach can be benchmarked against alternative methodologies such as market multiples or a cost-based approach.
6. CONCLUSION
The above is a general layman’s introduction to the main valuation approaches used in assessing damages in litigation cases. The particular valuation method(s) adopted in any given case must be rooted in the type of loss claimed and the facts of the case.
Finally, it should be noted that the methods listed above are not mutually exclusive, and it is common for experts to use a secondary method (e.g. market multiples) as a sense-check for their primary valuation method (often an income approach).
1 Fuller & Perdue, The Reliance Interest in Contract Damages (Pts. 1 & 2), Yale Law Journal Vol 46 No.3 (Jan 1937)
2 Factory at Chorzow (Germany v. Poland), Merits, 1928 Permanent Court of International Justice
3 As opposed to non-income generating assets (often collectibles) such as fine wine, art, jewellery and gold
4 ICSID award in the matter of Suez et al. v. The Argentine Republic dated 9 April 2015, https://www.italaw.com
Our partners Morgan Heavener, Frédéric Loeper, and Darren Mullins authored an Expert Analysis Chapter for the International Comparative Legal Guide – Corporate Investigations 2022. The chapter, New Frontiers in Compliance Due Diligence: Data Analytics and AI-Based Approaches to Reviewing Acquisition Targets, shares their insights regarding the increasing regulatory and practical requirements for conducting compliance-related due diligence and more sophisticated ways to approach such due diligence.
In these critical and unprecedented circumstances that we are all experiencing, our priority is to preserve the health of our teams, whilst continuing our activity with the same exacting standards and high quality that you have come to know.
Some of our offices in Asia have been affected for several months already and have demonstrated the resilience of our firm. We put in place the organisation and IT and communications systems necessary for them to remain perfectly operational, and these have now been extended to all of our locations worldwide.
Our work continues on all our engagements without exception, as we continue to meet your needs globally. Of course, we remain ready to assist you in making the decisions relevant and necessary for the current situation, as well as when normal activity resumes.
We hope that you and your loved ones stay safe in this exceptional situation, and that your teams and companies are able to face these challenges in the best of conditions. We assure you of our unfailing support.
For this last edition of the Economic Brief in 2021, we will take a look back at the year and see how it developed across three different zones that drive the global economy: the United States, China and the eurozone. For each of these zones, we will observe the forecast development, as predicted by the Bloomberg Consensus, of three key elements of their economies: GDP growth, inflation and budget deficit.
Accuracy is pleased to announce that fourteen of its experts have been named among the leading Arbitration Expert Witnesses in the Who’s Who Legal: Arbitration 2022.
Through nominations from peers and clients, the following Accuracy experts have been recognised as the leading names in the field:
Who’s Who Legal identifies the foremost legal practitioners and consulting experts in business law based upon comprehensive, independent research. Entry into their guides is based solely on merit.
Accuracy’s forensic, litigation and arbitration experts combine technical skills in corporate finance, accounting, financial modelling, economics and market analysis with many years of forensic and transaction experience. We participate in different forms of dispute resolution, including arbitration, litigation and mediation. We also frequently assist in cases of actual or suspected fraud. Our expert teams operate on the following basis:
• An in-depth assessment of the situation;
• An approach which values a transparent, detailed and well-argued presentation of the economic, financial or accounting issues at the heart of the case;
• Work carried out objectively, with the intention of making it easier for the arbitrators to reach a decision;
• Clear, robust written expert reports, including concise summaries and detailed backup;
• A proven ability to present and defend our conclusions orally.
Our approach provides a more comprehensive and richer response to the numerous challenges of a dispute. Additionally, our team includes delay and quantum experts, able to assess time-related costs and quantify financial damages related to dispute cases on major construction projects.
For our third edition of Accuracy Talks Straight, Frédéric Recordon discusses business and economic developments in China, before letting Romain Proglio introduce us to Amiral Technologies, a start-up specialised in disruptive technology. We then analyse the development of the hydrogen industry with Jean-François Partiot and Hervé de Trogoff. Sophie Chassat, philosopher and partner at Wemean, explores Chinese society “as one”. And finally, we look closer at the numbers with Bruno Martinaud, Entrepreneurship Academic Director at Ecole Polytechnique, as well as at the macroeconomic and microeconomic risk in China with Hervé Goulletquer, our senior economic adviser.
GOVERNING A GREAT COUNTRY IS LIKE COOKING A SMALL FISH1
At first glance, everything seems to be going well in China. The country has overcome the COVID-19 pandemic, its economy has regained momentum, and it seems to be entering a new era of prosperity, one that European businesses present in the country should be able to use to good advantage.
However, upon reading the 14th Five-Year Plan (2021–2025), troubling signs of the country starting to turn in on itself are becoming evident, allowing considerable doubt to linger over the country’s future growth trajectory.
After 40 years of modernisation, economic reform and opening up to the world, the Chinese economy has reached c. USD 10k in GDP per capita, a level similar to that of Japan and South Korea after equivalent 40-year periods of economic growth in those countries in the past. However, for the last five years, Chinese growth has been significantly running out of steam, and this trend may well continue if the country chooses isolation over the openness practised since Deng Xiaoping.
The Dual Circulation policy, the core of the 14th Plan, seems to prioritise the autonomy of the domestic market (internal circulation) over openness to foreign trade and investment (external circulation), despite the reassuring words of President Xi during the opening of the China International Import Expo in Shanghai on 4 November 2020.
The fact that several sectors key to Chinese development, such as the internet, energy and education sectors, have recently been taken in hand, and that a growing role is being attributed to public companies – despite their low efficiency and productivity – to the detriment of a highly dynamic private sector, is testament to the thinking behind such strict economic control. These developments also mark a major turning point in the recent economic history of the country. President Xi clearly owned this turning point when he stated: “the invisible hand [market forces] and the visible hand [government intervention] must be used correctly. (…) In China, the firm direction of the Party constitutes the fundamental guarantee”.2
The European Chamber of Commerce in China, in its 2021/2022 Position Paper dated 23 September 2021, expressed its concern about an insular withdrawal and urged the Chinese government to continue its work of reform and opening up to foreign companies.
The months to come will give an indication of China’s future trajectory. We can hazard that the country will be governed like a small fish is cooked, something President Xi likened to walking carefully across a thin sheet of ice.3
____________
1 President Xi Jinping quoting the Book of the Way and its Virtue (Dao De Jing, 道德经), The Governance of China, p.493
2 President Xi Jinping, speech during the 15th session of the Political Bureau of the XVIII Central Committee of the Communist Party, The Governance of China, p.137
3 President Xi Jinping, interview with BRICS correspondents, 19 March 2013
On 21 October 2021, Amiral Technologies announced its first round of fundraising totalling €2.8m. This represents an initial success for the start-up, founded in 2018 in Grenoble on the basis of an observation shared by numerous industrial players: how can we reliably predict breakdowns?
A spin-off of the CNRS, Amiral Technologies is based on almost 10 years of university research in artificial intelligence and automation & control theory. The company has successfully developed disruptive technology: from sensors installed on machines, detecting physical signals such as electric current, vibrations or humidity, algorithms make it possible to generate general health indicators for the equipment. These health indicators are then interpreted by unsupervised machine learning algorithms. They make it possible to identify the causes of breakdowns most likely to take place.
Unlike the majority of other solutions on the market, this solution (named DiagFit), which makes use of machine learning, does not require a history of breakdowns identified on a piece of equipment in order to use artificial intelligence. Instead, the algorithm is adapted to a specific use case in order to define a normalised functioning environment for the equipment.
More precise, quicker, and independent of the sensors themselves, the technology is already in use with SMEs and mid-sized businesses, as well as with large industrial groups such as Valéo, Airbus, Daher, Vinci and Thales.
The predictive maintenance market benefits from sustained growth dynamics, driven by an industrial base equipped with more and more sensors, a need to optimise inventories of spare parts and, of course, a greater need to avoid any costly shutdown in the production chain.
Amiral Technologies now aims to become the top supplier for the European market. The fundraising will enable it to strengthen its technical and commercial team, as well as to accelerate the development of DiagFit and its scientific and technological research.
For some years now, hydrogen has been presented as the miraculous solution to develop clean transport and energy storage on a large scale. The combustion of hydrogen, which produces only energy and water, is indeed 100% clean, and we can certainly glimpse its promising potential. However, the carbon footprint of its production varies considerably depending on its origin. The hydrogen sector is not necessarily clean, and it is only decarbonised hydrogen that is stirring up so much desire.
• Historically, industrial hydrogen – also known as grey hydrogen – has been produced from fossil fuels, and its environmental record is unsatisfactory, or even poor, depending on whether the CO2 emitted during its production is captured and stored. Grey hydrogen is an inevitable by-product of oil refining (desulphurisation of oil) and ammonia production. Today, more than 90% of the hydrogen produced in the world is grey, but this proportion is destined to fall significantly to the benefit of green and blue hydrogen.
• All eyes are now on the production of green hydrogen, that is, the hydrogen produced from decarbonised electricity (solar, wind, nuclear, and hydro power).
• Some researchers are also looking into the exploitation of white hydrogen, that is, hydrogen sourced naturally. As surprising as it may seem, knowledge of the existence and extraction possibilities of this native hydrogen is still rudimentary. For the time being, white hydrogen remains the dream of a few pioneers. Related knowledge is inchoate and accessible volumes unknown. Its research cycle and potential development will be long. If this path were to prove economically viable, it would most likely be explored by large oil producers thanks to their in-situ extraction expertise.
• Finally, big oil and petrochemical groups are calling for a transitory phase using blue hydrogen. Produced using natural gas, it can be considered clean as long as all related CO2 and methane emissions are captured.
For the decades to come, green and blue hydrogen will be the major areas of development in the energy industry. But this ambition is confronted with three constraints.
Constraint 1: Demand versus capacity
‘Nothing is more imminent than the impossible’ – Victor Hugo, Les Misérables
Environmental expectations for the industry seem excessive today, as the requirements for energy production capacity are of titanic proportions if we are to consider decarbonising a significant share of the market. As a reminder, global energy consumption mostly serves industry (29%), ground and air transport (29%) and residential consumption (21%).
Currently, the hydrogen sector meets less than 2% of energy needs.
To cover global energy consumption in 2030, an area the size of France would need to be covered in photovoltaic solar panels, according to Land Art Generator (US). And that is assuming that these panels benefit from optimal and constant sunlight and that they give maximum yield. As observed yields from solar power today stand at 25%, the logical conclusion would mean using an area four times the size of France to achieve the same goal.
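The back-of-the-envelope arithmetic above can be reproduced as follows. The "area of France" baseline is the Land Art Generator figure quoted in the text and the 25% observed yield is the figure cited there; the code is purely illustrative.

```python
# Illustrative only: scaling the required solar panel area by the observed yield.
# Baseline: 1.0 "France-sized" area at an assumed perfect (100%) yield.

area_at_optimal_yield = 1.0   # in units of "the surface area of France"
optimal_yield = 1.00          # assumption: optimal, constant sunlight, maximum output
observed_yield = 0.25         # observed yield from solar power today, per the text

# Lower real-world yield inflates the required area proportionally
required_area = area_at_optimal_yield * optimal_yield / observed_yield
print(f"Required area: {required_area:.0f}x the size of France")
```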
These figures help us to understand why the great powers are now considering reinvesting massively in the nuclear industry and securing their access to uranium deposits across the world. A huge redeployment of nuclear power for electricity generation might make it possible to solve the environmental equation in 50 years (climate change – IPCC objectives). Various significant issues remain to be resolved, of course, including questions of nuclear safety and the treatment and storage of nuclear waste. But given that the time scale to resolve these issues is measured more in the hundreds, if not the thousands of years, rather than in 50, some will quickly weigh up the consequences and decide.
Constraint 2: A development cycle for major projects that cannot be shortened
‘The difference between the possible and the impossible can be found in determination’ – Gandhi
Current electrolysis processes offer a low energy yield, and the green hydrogen sector will require the construction of gigafactories, the technology, design and scale-up of which are not yet fully understood.
Despite all attempts to accelerate the process, we are talking about major projects, and their development cycles are standardised. There needs to be a 10 to 20 MW prototype / test site before any 100 MW sites – currently the target entry capacity to play in the big league – can be launched.
These large projects follow the classic cycle in major project engineering as presented below. If we take as an example the liquefaction process for natural gas, which is the most similar in terms of engineering and construction complexity to that of large-scale electrolysis, between five and seven years would be necessary to go from the feasibility study to the commissioning of the test site.
Engineering cycle for major projects
Then, if we assume that feedback from the test site will be provided in parallel with the conception and feasibility studies of a gigafactory, we would need to consider five to seven additional years before the gigafactory could begin its operations. It would be reasonable to imagine that a 100 MW factory would be composed of independent units, whose installation would be sequential over an additional period of 12 to 24 months. Based on this plan, we would need to count around 15 years in total to create a gigafactory with an effective production of 100 MW.
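The ~15-year total above can be reproduced by summing the stage ranges given in the text; the stage labels below are ours, and the durations are simply the ranges quoted.

```python
# Illustrative only: adding up the development-stage ranges described in the text
# to reach the overall timeline underlying the ~15-year figure.

stages_years = {
    "prototype: feasibility study to commissioning": (5, 7),
    "gigafactory: conception to start of operations": (5, 7),
    "sequential installation of independent units": (1, 2),   # 12 to 24 months
}

low = sum(lo for lo, hi in stages_years.values())
high = sum(hi for lo, hi in stages_years.values())
print(f"Total development time: {low} to {high} years")
```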
To accelerate the development cycle of these types of project, the following levers could be activated:
• Directly selecting qualified service providers, minimising the tender phase. Based on our experience in similar major projects, the ‘open book’ selection solution makes it possible to reduce the tender time, all whilst maintaining effective control over capital expenditure. This lever could shorten the tender phase, saving around 12 months.
• Beginning construction of the prototype and obtaining in parallel the administrative and environmental authorisations for the gigafactory site. This lever would make it possible to reduce the development cycle by a few months.
• Starting up the factory capacity sequentially, segmented in discrete units, and by doing so, advancing the beginning of production by up to a year.
• Launching engineering and construction of the gigafactory in parallel with the prototype and managing the feedback on process optimisation through retrofitting (a rare but efficient disruptive approach).
• Accelerating the engineering and construction cycles by financing a more expensive project and mobilising more resources at a given moment.
For even greater urgency, more disruptive levers could be applied:
• Removing certain administrative and environmental constraints and the related delays.
• Developing tools (IT and AI), making it possible to accelerate the engineering stage significantly.
• Working on smaller interlinked units able to be serially produced.
In all these cases, we must accept that the costs and risks resulting from the use of an acceleration lever will be higher than those of a traditional development cycle.
Constraint 3: Financial constraint
‘If you have to ask how much it costs, you can’t afford it’ – John Pierpont Morgan
Developing the green hydrogen sector requires massive and sustained investment. The major powers have finally understood that fact and acted: more than 30 countries have announced investments totalling almost 300 billion euros to develop the sector.
However, these substantial investments still seem insufficient when confronting the carbon behemoth menacing the planet. Based on the calculations of the Energy Transition Commission shared in April 2021, 15 trillion dollars must be invested between 2021 and 2050 to decarbonise the global energy market. That comes to 50 times more than what has been announced to date.
As Bill Gates said via his Catalyst initiative from Breakthrough Energy, the scientific, political and economic worlds have already proved their ability to support innovation in energy and to give it a favourable development framework. That is what happened in the past few decades with solar and wind energy and lithium-ion batteries.
But in 2021, we no longer have the luxury to wait decades. We must collectively make a quantum leap to accelerate decarbonisation innovation and its implementation. We are talking about not only investing in proportions that far exceed investments made in the past but also freeing ourselves of historical financial IRR models.
Here is a short list of some of the actions that may be put in place:
• Sourcing a colossal amount of capital from central banks, countries, financial institutions, great fortunes and philanthropists.
• Also targeting a significant proportion of personal savings (pension funds, mutual investment funds, etc.).
• Enhancing incentives for decarbonisation technologies by implementing systems more powerful than carbon taxes and credits (the effect of which is one-off), for example, using specific interest rates based on a project’s future environmental impact.
• Not providing a financial return (IRR) for some of the capital invested. The expected return would become mostly environmental…
– Breakthrough Energy, a non-profit organisation, raised over a billion euros for its Catalyst initiative at the end of September 2021.
• Putting in place environmental reporting that is as reliable as financial reporting.
Investments in decarbonisation are revolutionising finance through their magnitude and the nature of their expected return; this will be environmental, not financial.
‘As one’
Sophie Chassat, Philosopher, Partner at Wemean
Culturally, China functions as our opposite: its customs, mental models and rituals are highly compelling to us. To our great benefit, the philosopher François Jullien insists, seeing in Chinese thought a valuable means of decentring ourselves and leaving behind the certainties of Western culture – particularly binarism, a lack of nuance and the constant use of force in the name of logic.1 That in no way means that we must consider this other perspective to be right, but experiencing absolute difference, as the other perspective invites us to do, often allows us to choose new paths for ourselves.
Amongst the most fascinating elements, there is the way in which Chinese society always seems to react ‘as one’: collective expression there is unanimous. Of course, the nature of the political regime and its current toughening stance with regard to the expression of any form of singularity or standing out from the crowd have much to do with it. Nevertheless, China has always represented the polar opposite of individualism and communitarianism, which, in the West, have led to the loss of a sense of public interest.
To picture this collective movement in its entirety, we might think of Hobbes’ Leviathan, with the famous image on the frontispiece of the work presenting the body of the king composed of the masses of individuals from the kingdom, who, if we look more closely, have no faces, being fully turned towards the face of the sovereign. This detail reminds us of the danger of using organic metaphors to talk about societies: they may well claim to mean that if the parts are there for the whole, the whole is also there for the parts; however, often the parts end up cowering before the whole…
A hive or even a murmuration (the natural phenomenon seen with large flocks of birds or schools of fish moving in concert, with each animal seeming to follow some form of choreography laid out in advance, without any individual leading the movement) might also provide, at first glance, images suggestive of the collective movements of which the Chinese are capable. But, of course, we must not linger over such animal analogies; the ethnologist Claude Lévi-Strauss rightly considered them to be the beginning of barbarousness, tantamount as they are to denying the human quality of the other culture.2
Though none of these metaphors depicts a desirable model, the fact remains that this way of functioning ‘as one’ holds up a negative mirror to us: how can we overcome the impasse of the ‘society of individuals’ (Norbert Elias), which characterises a model of Western society where any higher interest seems to have been lost? How can we find something like a collective impulse? What if our individual impulses made us want a collective impulse in the first place?3 Leaving behind individualism does not mean annihilating the individual; it is an invitation to stop looking only at oneself and to move towards shared achievements. Between the West and China, between atomism and holism, a third path is possible.
____________
1 François Jullien, A Treatise on Efficacy (1996).
2 Claude Lévi-Strauss, Race and History (1952).
3 Sophie Chassat has recently released Élan Vital: Antidote philosophique au vague à l’âme contemporain, Calmann-Lévy editions (October 2021).
Numbers lie
Bruno Martinaud Entrepreneurship Academic Director, Ecole Polytechnique
It’s 2009. Kevin Systrom (soon joined by his co-founder, Mike Krieger) is working on a geolocation social media project, similar to Foursquare. Together, they manage to convince Baseline Ventures and Andreessen Horowitz to invest $500,000 in the project. This enables them to dedicate themselves full time to the adventure. A year later, Burbn is launched in the form of an iPhone application that makes it possible to save locations, plan outings, post photos, etc. The application is downloaded massively, but the verdict is not quite what they hope for: the users, beta-testers, don’t like it at all. Too cluttered, too messy, it’s confusing and most of them have stopped using it. A patent failure. In the normal course of things, the entrepreneur would digest the feedback, learn from the experience and move on to a new adventure. The metrics are bad – duly noted. And yet Kevin Systrom doesn’t stop there, because he notices something that at first glance seems trivial: the photo sharing function (one amongst so many others) seems to be used by a small number of regular users… He investigates, questions these users and realises that this small group loves this function (and only this one). Instagram is born, all from the happy realisation that a small number of people, hidden in the multitudes that didn’t like Burbn, used the app for one reason.
This
story highlights a counter-intuitive principle for the educated manager: numbers
lie in the beginning. Burbn’s metrics were catastrophic. The rational response
would have been to acknowledge that fact and move on to the next project. But a
weak signal was hiding there, showing potential.
The
story of Viagra follows a similar pattern. Pfizer laboratories were developing
a blood pressure regulator, which was in phase III of testing before gaining
market authorisation. If we remember that the development of a new molecule
represents an investment of approximately $1bn, that would mean around $700m to
$800m had already been invested in the project. Pressure was therefore high to
achieve this authorisation as soon as possible. It just so happened that
someone in Pfizer’s teams noticed that some people in the test sample hadn’t
returned the pills that should have been left over as part of the procedure
given to them. Who pays attention to that? Some incoherent data, with no direct
link to the topic (efficacy of the molecule)… A few abnormal results in a
table of 300 columns and 100,000 rows… And yet, by investigating, this person
realised that those who weren’t giving back the extra pills all shared the same
characteristics of age and sex. Pfizer then realised that this blood pressure
regulator had an unexpected side effect so interesting that the project changed
course entirely.
A simple observation lies behind these examples, which we could multiply endlessly: an innovative project, a start-up just getting off the ground, is an adventure to be explored.
Exploring first means remembering that you don’t know what works and what doesn’t work in your idea. It’s recognising that you’re facing complex issues, that you don’t quite grasp all the variables of these issues and don’t understand how the variables interact, or their effects.
From
that starting point comes the following consequence, the subject of this
article: you don’t know what to measure and you don’t know the meaning of what you’re
measuring. This goes both ways: what might initially seem like poor metrics, as
in the case of Burbn, can hide a gem. But the opposite is also true. We have
recently worked with a start-up developing a smart object for well-being, aimed
at the public at large. The company quickly sold some tens of thousands of the
product, and based on this success, raised funds to scale up quickly and
control the market, only to find that its sales, far from growing, plateaued and then fell. It turns out that selling 30,000 products wasn’t a sign of massive and rapid market adoption; those sales represented the majority of the addressable market.
After a period of trying different things, questioning themselves, doubting and
researching, the start-up’s founders finally found a B2B market, centred on a
service offer based on the smart object. The irony is that its strong early
figures didn’t mean that it had found its market.
These observations lead to two simple and practical recommendations, which seem almost trivial when written down but can be slippery in their application:
1.
Remember that the only way to progress in a complex environment is through
experimentation. Trial and error. Keep what works. Eliminate what doesn’t.
Understanding will come later. Pixar has always applied this empirical approach
to the extreme. From a starting concept, Pixar tests everything. There have
been, throughout the production process, 43,536 variations of Nemo, 69,562 of
Ratatouille and 98,173 of Wall-E… That’s the path between initial idea and
final success.
2. Give yourself the tools to ‘capture’ weak signals, that is, put strategies in place to save what seems irrelevant in one instant but which could be useful later. Remember that at a given moment, in the first life of an innovative project, no one is able to determine what is relevant and what is not.
Unfortunately, the human mind is wired in such a way as to try to give early meaning to the information that comes to it, which leads to neglecting the need to test everything (because we’ve already understood) and to filtering out noise (because we’ve already identified the signal)… These are probably the two deadly sins of the innovator or the start-up entrepreneur.
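By way of illustration, a strategy for ‘capturing’ weak signals can be as simple as measuring retention per feature rather than aggregate usage: Burbn’s overall numbers were catastrophic, but a per-feature view would have surfaced the small photo-sharing cohort. The sketch below uses entirely hypothetical data and feature names; it is not any real product’s telemetry.

```python
# Hypothetical usage log: (user_id, feature, week). Aggregate downloads
# looked fine and aggregate retention looked terrible; the signal hides
# in one feature used by a tiny but loyal cohort.
from collections import defaultdict

events = (
    [("u1", "photo_sharing", w) for w in range(8)]   # two loyal users,
    + [("u2", "photo_sharing", w) for w in range(8)]  # active every week
    + [("u3", "check_in", 0), ("u4", "plans", 1), ("u5", "check_in", 0)]
)

# Distinct active weeks per (user, feature) pair.
weeks_active = defaultdict(set)
for user, feature, week in events:
    weeks_active[(user, feature)].add(week)

retained = defaultdict(int)    # users active 4+ distinct weeks, per feature
users_seen = defaultdict(set)  # all users who ever touched the feature
for (user, feature), weeks in weeks_active.items():
    users_seen[feature].add(user)
    if len(weeks) >= 4:
        retained[feature] += 1

for feature, users in users_seen.items():
    rate = retained[feature] / len(users)
    # A tiny cohort with near-perfect retention is the weak signal to
    # investigate, even if the feature's absolute usage looks negligible.
    print(f"{feature}: {len(users)} users, {rate:.0%} retained")
```

The point of the design is that nothing here decides in advance which feature matters; every metric is kept and scanned, which is precisely what ‘testing everything’ and ‘filtering nothing’ mean in practice.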
When we look at the Chinese economy, in this early autumn, two dynamics emerge. First, from a macroeconomic perspective, we can note very disappointing GDP growth during the third quarter of the year. As was forecast by Bloomberg’s economic consensus, one of the most viewed forecast aggregates, performance barely got off the mark (+0.2%, quarter on quarter). This phenomenon may not last long, however, and from the fourth quarter, the country may return to its previous performance level (around 1.5%, quarter on quarter). But even if we accept the forecasts, is there not a risk of being taken by surprise again in the near future?
China: the slump may not last
Second, and this time from a microeconomic
perspective, we have the Evergrande issue. It is the country’s largest property
developer, which, over time, has transformed into a type of conglomerate. It is
unable to pay its debts and coupons that are falling due. And it is fair to say
that its debts are high: over 300 billion dollars in total or almost 2% of the
country’s GDP, including 90 billion in financial debt (bank loans and bonds),
150 billion in commercial debt (including deposits from off-plan buyers), and
80 billion off balance sheet (essentially investment products issued by the
company). Available cash would only cover 40% of its short-term debt (maturing
within 12 months). Before starting a carve-out process for some of the assets,
it was estimated that a fire sale would involve a debt haircut of some 50%.
We should note that the Evergrande case, however iconic and high profile the company may be, is not unique. Other developers are putting themselves in defaulting positions, even when they are able to pay what they owe. They point to the toughening regulation, which hinders their business development considerably, and try to ensure that their creditors are the ones left with the losing hand. Or, to put it another way, they try to create enough scandal or public difficulty to force the public authorities to revise their attitude.
Evergrande: asset prices clearly falling
A major credit event in a suddenly
deteriorated economic environment gives us a more worrying outlook: what if China
was no longer a centre of stability in a world that very much needs one?
The Xi Administration (based on the name of
President Xi Jinping) has started a restructuring/consolidation phase for the
country’s economy, with the aim of reinforcing its fundamentals. It no doubt
considered the international environment to be favourable to such an action.
The decline of the COVID-19 pandemic, the return of global growth and a
theoretically more cooperative US president should create sufficiently
promising external demand conditions to compensate for any ‘blunders’ in domestic
spending that the reforms (even if conceived and implemented well) would doubtless
cause.
However, as is often the case in life,
things have not gone exactly according to plan.
The Beijing government started with three areas: real estate, debt and inequality. Excesses in all three must be reduced.
Let us start with real estate. Its total
weight in the Chinese economy, taking into account upstream and downstream
effects, is estimated at between 25% and 30%. The scale is reminiscent of what
we saw in Spain or Ireland before the Great Recession in 2008. Might we do well to take this similarity as an invitation to prevent rather than to cure once the real estate bubble bursts? Moreover, real estate needs have become less
significant (apart from the considerable wave of migration from the countryside
to cities), whilst prices have skyrocketed. An average of 42 m2 per
person in a dwelling is perfectly comparable to what we can see in major
Western European countries. However, the ratio of property prices to average
household income is over 40 in Beijing or Shanghai (2018 figures). Though comparable
to the ratio for Hong Kong, it is significantly higher than its equivalent for
London or Paris (around 20), not to mention New York (12). This level observed
in large Chinese cities is only understandable if economic growth and
demographics remain sufficiently strong to justify a highly dynamic demand for
property and therefore to maintain expectations of property price increases. We
know that the demographics are not heading in this direction, and we sense that
the potential GDP growth is slowing…
Preventing the formation of a real estate bubble could be seen as a pressing obligation. First, is it not necessary to preserve the financial system’s ability to take the initiative at a time of structural change in the economy? The system’s exposure to the real estate sector is significant, between 50% and 60% of total bank loans granted. Second, less investment in real estate would facilitate, all else being equal, increased investment in capital goods or intellectual property products. Measures of both productivity and economic growth could find themselves improved.
Credit exposure in real estate sector
China: heading towards a new breakdown in fixed investment?
Now let us talk about debt. Debt in
non-financial corporates is high; in fact, it is among the highest in major countries
around the globe. It represents 160% of the country’s GDP. Of course, we can
highlight the much more reasonable levels noted for households and public
authorities and therefore talk about a very ‘presentable’ average. But
embarking on economic reforms, which will most certainly create losers as well
as the expected winners, starting from a situation with a high level of debt in
the corporate sector is uncomfortable. This is even more so the case when we
consider the ricochet effect on the financial system of the difficulties facing
a certain number of companies.
We must therefore understand that the
importance given to greater stability in the financial system risks weighing on
economic growth. As we highlighted previously, this is another reason to ensure
more efficient investment signposting – towards where there is the greatest
potential for long-lasting and inclusive growth.
Debt of non-financial Chinese companies among the highest
The thread that runs from real estate to debt leads to inequalities. These inequalities are too great, and Beijing is aiming to reduce them. The Chinese real estate ‘adventure’ described above, in addition to the development of the technology sector and its consequent outperformance of the market, has contributed to an increase in inequalities, now putting them at the same level as in the United States. The richest 1% holds 30% of the wealth of all households in China, a proportion that doubled in the 20 years from 1995 to 2015. For Beijing, this development seems to carry the risk of challenging political stability. Is it not understandable then that the middle class should call for a reduction in these inequalities?
China: inequality becoming a political matter
No sooner said than done, we might wish to say; after all, President Xi is not one to dawdle. A large number of measures have been implemented to effect this triple ambition. Many relate to the technology and real estate sectors and encourage greater moral standards from the country’s citizens. The table below provides a summary of the changes.
China: a significant catalogue of party/government initiatives
But all this has a destabilising effect!
Ensuring parallelism between the impact of decisions that will suppress growth
(real estate, finance and technology) and the impact of those to come that will
boost it (aim to increase added-value content of the Chinese economy, less
dependence on foreign countries, and ‘healthy’ stimulation of domestic demand,
to mention what we currently understand) will require significant skill in
economic policy. Even in what remains a relatively nationalised system, it will
be quite a challenge. Benefitting from a favourable external environment is
certainly a ‘pressing obligation’ for Beijing today – never mind if, at least
at first, it flies in the face of the ambition to become more autonomous from the
rest of the world. Are we there yet?
Not really – with such a complicated
international environment (from the COVID-19 pandemic, which has not yet disappeared,
to persistent Sino-American tensions, not to mention a global economy that is
still recovering), it will be necessary to arbitrate between the desirable
(domestic reforms) and the possible (degrees of freedom offered by the economic
context and external policies). That will mean accelerating when possible and
slowing down when necessary. It will be an arduous task for the person in
charge of economic policy, not to mention ensuring that the business community
falls in line. It will not always be easy!
A topic much in the news of late is inflation. Indeed, its recent rise is dominating market news, and its effects are being felt globally. This edition of the Economic Brief will see us look into this striking rise in inflation and what patterns might be taking shape. We will also look into how inflation and pay rises interact, as well as how they might affect future employee compensation negotiations.
Accuracy supported the shareholder and management of Baas B.V. – a Dutch player in the construction of energy infrastructure with additional services in the field of fibre optic networks and in-building installations – with the sale of part of its shares to GIMV, a private equity investor with offices in Belgium, the Netherlands, Germany and France. Together with its investment in Verkleij B.V. (made in April this year), GIMV will set up a strategic national combination active in the design, construction and maintenance of essential infrastructure for energy, water and telecom. Thanks to their complementary areas of expertise, the combination will form a strong, multidisciplinary and stable party for the future.
Accuracy conducted financial buy-side due diligence and assistance in financing for Sirail in the context of the acquisition of IGM – Electromechanical & Electronic Systems.
Accuracy conducted financial vendor due diligence for MBO&Co, FINCAP Invest and other investors in the context of the sale of IMMR to Veranex (backed by Summit Partners).
Accuracy, the international independent advisory firm, has promoted four of its directors to partners in its Paris, Montreal and Singapore offices. This brings the total number of Accuracy partners to 56, spread across 13 countries.
Accuracy conducted financial buy-side due diligence for 21 Invest in the context of the acquisition of Edukea Group, a European platform specialising in training for natural health and well-being professions.
During the summer, economic figures were updated to reflect the latest activity. Of particular note were the figures for July and August, which appear to show the incipient normalisation of the global economy, a trend that is set to continue. In this edition of the Economic Brief, we will look into the reasons behind this normalisation effect. We will also touch on a new development being seen in the labour market.
Accuracy, the global independent advisory firm, has promoted two of its
directors to partners in its Singapore office. This brings Accuracy’s total
number of partners to 56, across 13 countries.
Samuel Widdowson specialises in forensic construction planning and programming in a variety of contexts, whilst Zaheer Minhas specialises in major projects infrastructure advisory across multiple sectors. Their promotions reflect Accuracy’s continued expansion in the Southeast Asia market.
For the second edition of Accuracy Talks Straight, Nicolas Barsalou gives us his point of view on the way out of the crisis, before letting Romain Proglio introduce us to Delfox, a start-up specialising in artificial intelligence. We will then analyse the impact of the crisis on the aeronautics sector with Philippe Delmas, Senior Aerospace & Defence advisor, Christophe Leclerc and Jean-François Partiot. Sophie Chassat, philosopher and partner at Wemean, will invite us to explore the way out of the crisis from a cultural angle. Finally, we will focus on public debt with Jean-Marc Daniel, French economist and Professor at ESCP Business School, as well as on inflationary risk with Hervé Goulletquer, Senior Economic Advisor.
The crisis that we have been experiencing for almost one and a half years now has no equivalent in modern history. It is neither a classic cyclical crisis, nor a replica of the great financial crisis of 2008. It would be dangerous to think, therefore, that we are coming out of it in the same way as previous crises.
What are we seeing? Two words enable us to deepen the analysis.
The first is “contrast”. This is, of course, not the first time that an economic crisis has affected some geographies more severely than others, particularly, in this case, Europe more than the Far East. However, it is the first time that we observe such diversity in the impact on different economic sectors. As a result, some affected sectors will take several years to return to their situation in 2019, like air transport or tourism for example. Conversely, other sectors have taken advantage of the crisis, like online activities (e-commerce, streaming services, video games), or have served as “safe investments”, like luxury goods.
The second word is without a doubt “uncertainty”. Given the tense geopolitical context and unprecedented capital injections in the economy, the current bright spell may lead in the relatively short term to another more classic crisis, made all the more dangerous as recent wounds will not have healed.
As advisers to innumerable economic players across the world, we observe an unprecedented de-correlation between certain market situations and the general state of the economy. On the one hand, the mergers and acquisitions market, boosted by an unparalleled level of liquidity, has rarely – if ever – experienced such exuberance both in volumes and in prices, and this was the case well before the crisis emerged. On the other hand, the corporate restructuring market is also very active, carried in particular by bank renegotiations for certain sectors in difficulty.
This paradox exists in appearance only: given the elements mentioned above, it is possible and quite natural to observe these two trends at the same time.
In this context, we think that, now more than ever, financial and economic players should avoid sheeplike behaviour and analyse each situation in an individualised and tailor-made way.
The most interesting cases to consider are certainly those sectors that are experiencing both positive and negative trends. The real estate sector is particularly relevant because it is undergoing profound and long-lasting change, combined with the effects of the last crisis. Let’s look at two representative sub-sectors: retail and office property.
The first has long been affected by the strong and continued development of e-commerce, a phenomenon that accelerated in 2020 under the effects of the lockdown and the closure of numerous shopping centres, to the extent that the value of retail property at the end of last year was at a historic low. Our long-held belief is that this fall in values was excessive, characteristic of the sheeplike behaviour mentioned above and not adapted to the modern economy. Centres that are well located, well managed and well equipped will continue to be major players in retail. It is fortunate that, for a few weeks now, others are beginning to realise this and that these property values are rising again.
The second sub-sector benefitted up to the 2020 crisis from a favourable situation, thanks to a structural mismatch between supply and demand and real interest rates at zero that pushed up the so-called “safe investment” values like property. Moreover, the crisis has until now had little impact: for the most part, rent has continued to be paid and, given an extremely accommodating monetary policy, capitalisation rates and therefore values have changed little. But these two parameters are now threatened. The rise of remote working, if it proves to be long-lasting and significant (more than just one or two days a week), will inevitably have considerable consequences on the number of square metres necessary for office space as well as its location. Not all of these impacts will necessarily be negative: though it is certain that large business centres like La Défense and Canary Wharf are suffering and will continue to suffer, central business districts may see their values and occupation rates continue to rise.
As for macroeconomic parameters, and notably inflation, only an oracle could predict how they will develop: the only thing to do is to remain vigilant and to provide the means to minimise fragility through strategies that favour flexibility and agility. In this respect, it will be essential to monitor the development of the banking sector, but that would be a topic for another discussion…
* “THE MYRTLES HAVE FLOWERS THAT SPEAK OF THE STARS AND IT IS FROM MY PAIN THAT THE DAY IS MADE THE DEEPER THE SEA AND THE WHITER THE SAIL AND THE MORE BITTER THE EVIL THE MORE WONDERFUL THE GOOD”
LOUIS ARAGON “THE WAR AND WHAT FOLLOWED” (FROM THE UNFINISHED NOVEL)
Founded in 2018 in Bordeaux, Delfox is an artificial intelligence platform that uses reinforcement learning to model systems able to evolve intelligently, autonomously and intuitively in a constantly changing environment, without human intervention or programming in advance.
The technology developed by Delfox consists in giving objectives to the AI, which must then find a way to achieve them. When it comes to AI, it is essential to understand that this intelligence is based above all on learning.
It is therefore learning mechanisms that lie at the heart of Delfox’s development, which has progressed significantly for over two years in cutting-edge skills like deep learning and reinforcement learning, as well as the related advanced algorithms.
The goal is to teach a machine to react autonomously, without indicating how to resolve an issue. The machine itself proposes solutions, which will lead to rewards or penalties; it will therefore learn from its mistakes.
For example, teaching a drone to go from point A to point B does not mean telling it to avoid collisions or to accelerate at certain points of the journey; it is about letting it react by itself and rewarding or penalising it based on the solutions it proposes. Potential applications are vast.
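The reward-and-penalty mechanism described above can be illustrated with a generic, minimal Q-learning toy. To be clear, this is a textbook sketch, not Delfox’s actual technology: an agent on a one-dimensional line learns to reach a goal position purely from rewards, without ever being told which moves to make.

```python
# Generic Q-learning illustration (not Delfox's system): the agent is only
# rewarded for reaching position 4 from position 0; it is never told how.
import random

random.seed(0)
N, GOAL = 5, 4
actions = [-1, +1]                 # step left or step right
Q = {(s, a): 0.0 for s in range(N) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):               # episodes of trial and error
    s = 0
    for _ in range(20):
        # Explore occasionally; otherwise act greedily on current estimates.
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else -0.01   # reward only at the goal
        best_next = max(Q[(s2, x)] for x in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
        if s == GOAL:
            break

# After training, the greedy policy steps right from every state.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in range(N - 1)}
print(policy)
```

The same principle scales up, with far richer state spaces and neural networks in place of the Q table, to the drone navigation example in the article: the designer specifies only the objective and the reward, never the route.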
There is, of course, the area of satellites, in which Delfox is already working with Ariane Group for space surveillance purposes. Delfox participates in detecting satellite trajectories based on data provided by the GEOTracker space surveillance network to avoid collisions and interference.
But the fields of application are a lot more extensive than just satellite uses: autonomous military and urban drones, cars, logistics, defence, the navy, and more are all potential areas of interest.
Autonomy will no doubt be a key segment of activity in the next decade, and Delfox is already one of the most successful players in the field. With a team of 15 people, Delfox aims to reach €1m in revenues in 2021 and is already working with Ariane Group, Dassault Aviation, Thales and the DGA (French government defence procurement and technology agency).
The aeronautics industry is feeling the heat
Philippe Delmas Senior Advisor – Aerospace & Defence, Accuracy
Air transport is at the top of the list when it comes to sectors most heavily affected by the COVID-19 crisis. Behind it, the entire aeronautics industry is suffering, from manufacturers to equipment suppliers of all sizes. The shock is all the more brutal as annual growth stood on average at 5% over the past 40 years and was forecast to continue at over 4% a year for the decades to come.
In 2020, air traffic fell by 66% compared with 2019, and both the timing and the extent of its recovery remain uncertain. For domestic flights in large countries, recovery will depend on the speed and efficiency of vaccination efforts. It is already strong in the United States (traffic was only 31% lower in March 2021 than in March 2019) and China (11% higher), but it remains weak in the European Union (63% lower). For international flights, recovery will depend on lockdowns linked to the emergence of new variants and the rate of vaccination in each country, not to mention the confidence that countries will have in each other’s efforts to contain the coronavirus. This recovery is currently very weak. In total, the level of traffic in 2021 will remain much lower than historical levels. At the end of April 2021, the IATA forecast world air traffic at 43% of the level in 2019 (compared with a forecast of 51% in December). Globally, a return to the 2019 level of activity will no doubt have to wait until mid-2022 for domestic flights and 2023, or even 2024, for long-haul flights. Only air freight has experienced continued growth, but it represents less than 10% of all air traffic.
Several factors lead us to consider that air traffic is not yet ready for a return to the long-lasting growth experienced in the decades before the crisis (5% a year from 1980 to 2019), and various arguments reinforce this vision:
– Passengers’ ecological concerns are becoming of prime importance – some will be more reluctant to travel and especially to travel far.
– Large groups have got through the COVID-19 crisis by completely stopping all business travel: short, medium and long haul.
It was an abrupt lesson, with radical conclusions favouring the strict limitation of such travel. As a result, these groups generated significant savings, as well as an improved ecological balance sheet, something monitored by the markets more and more closely. According to the leaders of major European groups surveyed at the end of 2020, business travel may permanently fall by 25% to 40% compared with 2019.
– These two factors are already enough to bring about a significant drop in traffic, but this drop will be compounded by a third factor, an immediate consequence of an airline’s economic model: first class and business class passengers are the major levers of profitability for a long-haul flight. If their traffic is reduced by 25% to 40%, airlines will have no other choice but to increase average prices significantly for all passenger classes.
The impact on prices of the change in behaviour should lead to a new economic balance: a reduction in business class volumes of 30% may lead to an average increase in ticket prices (business and economy) of 15%. With a price/volume elasticity of 0.9, an average fall in economy travel of 13.5% can be expected.
To sum this up, the forecast impact on passenger traffic could be as follows:
– A fall in business class and first class passenger numbers of 30%
– A fall in economy class passenger numbers of 13.5%
– An increase in average sales prices of 15%.
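The arithmetic linking these figures is straightforward and can be checked in a few lines. The 30% business-class fall, the 15% average price rise and the 0.9 elasticity are the article’s assumptions; the economy-class fall simply follows from the last two.

```python
# Simplified sketch of the price/volume mechanics described in the article.
business_volume_drop = 0.30   # assumed fall in business/first class traffic
price_rise = 0.15             # assumed average rise in ticket prices
elasticity = 0.9              # assumed price/volume elasticity, economy

# A constant-elasticity approximation: volume change = elasticity * price change.
economy_volume_drop = elasticity * price_rise
print(f"{economy_volume_drop:.1%}")   # 13.5%
```

This linear elasticity rule is of course a first-order approximation; real demand curves are not linear, but it is sufficient to reproduce the 13.5% figure from the stated assumptions.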
In our opinion, the sudden turbulence in the industry presents a unique opportunity for it to restructure; its untenable financial situation obliges it to do so. The air transport sector has taken out debt of over $250 billion since the beginning of the pandemic, and its total net debt should exceed its revenues during the course of 2021 or in early 2022. Today, the sector continues to lose tens of billions of dollars in cash each quarter, contributing to the rise in its debt levels.
The industry will be forced to overhaul its model significantly, especially given that this economic constraint doubles up as an ecological constraint that is just as fierce. Indeed, air travel is a substantial emitter of CO2, representing up to 2.5% of emissions globally and around 4% in the European Union. In addition, air travel suffers another constraint that is specific to the sector, namely that CO2 represents only a fraction of its overall climatic impact. The most recent studies (July 2020) confirm that its emissions of nitric oxide (NO) at high altitudes contribute more to global warming than its emissions of CO2.
In total, air travel alone represents 5–6% of humanity’s impact on the climate. But it is not for lack of trying – the industry has been making substantial efforts. CO2 emissions per passenger kilometre have shrunk by 56% since 1990, one of the best performances of all industries. The total emitted tonnage of CO2 has nevertheless doubled over the same period because of the increase in traffic. Ryanair, the European low-cost leader, summarises the climatic impasse of air transport quite nicely: its aeroplanes are very recent, their occupancy at a maximum (average rate of 95%), but it is the company with the highest CO2 emissions in Europe after nine operators of coal power plants.
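As a rough consistency check on the figures above: if emissions per passenger-kilometre fell by 56% while the total emitted tonnage still doubled, traffic must have grown by a factor of roughly 4.5 over the period. A minimal sketch, using only the article’s two stated figures:

```python
# Total emissions = traffic * emissions per passenger-km, so the traffic
# growth factor is the total-emissions factor divided by the per-km factor.
per_km_factor = 1 - 0.56   # emissions per passenger-km, 1990 level = 1.0
total_factor = 2.0         # total emitted tonnage doubled since 1990
traffic_factor = total_factor / per_km_factor
print(round(traffic_factor, 1))   # ~4.5
```

This is why efficiency gains alone have not reduced the sector’s footprint: traffic growth has outpaced them by a wide margin.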
Technological progress will continue but, for aeroplanes as we know them, it will not be accelerating. As for truly new technologies (hydrogen, electricity), their time will undoubtedly come, but too late to play a significant role in meeting the objectives of the Intergovernmental Panel on Climate Change (IPCC) in 2050, that is, limiting global warming to 1.5°C and net carbon emissions to zero.
In this context, the industry must reinvent itself, taking into account the following points:
– Growth in traffic will for a long time remain lower than the growth seen in previous decades.
– Progress in energy efficiency will continue but will not accelerate.
– This progress should be completed by credible and rapid climatic solutions (i.e. not offsetting), like clean fuel. Boeing and Airbus recently announced, in spring 2021, their desire to accelerate their use of green kerosene quickly and significantly. But the volumes will be insufficient to meet the objectives of the IPCC.
– The serious issue of high-altitude emissions – currently left out of the equation – will have to be dealt with.
– Considering the cost of decarbonisation solutions, the cost of air travel will inevitably increase by a significant margin.
– This increase will weigh heavily on the most price-sensitive traffic, tourism, whilst technology will clearly and permanently reduce “high contribution” traffic.
– Combined with a concerning debt situation, these factors will force a complete overhaul of the economic model of air transport.
Despite this severe assessment, we think that there are ways for the industry to react radically and constructively. We will present some of them soon.
____________
1 Boeing and Airbus
2 International Air Transport Association (IATA)
3 Accuracy interviews with management of large groups
4 OECD, INSEE
5 IATA
Coming out of a crisis, but what are we heading into?
Sophie Chassat Philosopher, partner at Wemean
The metaphor is a medical one: a crisis is the “critical” moment where everything can change one way or the other – the moment of vitality or the moment of mortality. It would seem, however, that things might not be so clear-cut and that, as Gramsci put it, a crisis instead takes the form of an “interregnum”, “consist[ing] precisely in the fact that the old is dying and the new cannot be born”. What will come out of all this? The suspense… Whatever the answer, it may well come out of left field.
This is what we’re currently feeling: a not very comfortable in-between, and we don’t know where it will lead us. The new world is not coming, and the old world is not coming back, even if, like the characters in Camus’s The Plague, we blithely or even unconsciously take up our old habits again as soon as the storm passes. Yet, at the same time, we know that something has changed, that this crisis has been, in the truest sense, an “experience”, a word whose etymology means “out of peril” (from the Latin ex-periri). Indeed, coming out of a crisis means always coming through and learning a lesson from it. The ordeal inevitably sees us transformed.
But what would be a “good” way to come out of a crisis? A way that would mean coming out on top and not crashing out? For the philosopher Georges Canguilhem, “The measure of health is a certain capacity to overcome organic crises and to establish a new physiological order, different from the old. Health is the luxury of being able to fall ill and recover.”
Overcoming a crisis is inventing a new way of life to adapt to an unprecedented situation. Indeed, health is the ability to create new ways of life, whilst illness can be seen as an inability to innovate. We must also be wary of all the semantics that suggest a return to the same or the simple conclusion of a certain state: “restarting”, “resuming”, “returning to normal”, “lifting lockdown”.
Inventing, creating… that’s what will truly and vitally take us out of the crisis. As another philosopher, Bruno Latour, put it from the very first lockdown, “if we don’t take advantage of this unbelievable situation to change, it’s a waste of a crisis”. That’s why we must also see this period of coming out of a crisis as an occasion to come out of our mental bubbles and leave our prejudices behind. And let’s not forget to question the meaning of our decisions: why do we want to change? What new era do we want to head into, knowing that other crises are waiting for us? The thicker the fog, the stronger and further our headlights must shine.
____________
1“The crisis consists precisely in the fact that the old is dying and the new cannot be born; in this interregnum a great variety of morbid symptoms appear.” Antonio Gramsci, Prison notebooks (written between 1929 and 1935).
2“For the moment he wished to behave like all those others around him, who believed, or made believe, that plague can come and go without changing anything in men’s hearts.” Albert Camus, The Plague (1947).
3 Georges Canguilhem, “On the Normal and the Pathological”, in Knowledge of Life (2008).
4Le Grand Entretien, France Inter, 3 April 2020.
Considerations on public debt
Jean-Marc Daniel French economist, Professor at ESCP Business School
By replacing corporate debt, the economic support policies linked to COVID-19 have sent public debt levels through the roof globally. According to the IMF, global public debt should increase from 83% of GDP at the end of 2019 to 100% at the end of 2021. At that time, this ratio is expected to reach 119% in France, 158% in Italy and… 264% in Japan. Yet, many of the comments brought about by this explosion are absurd.
FOUR MISCONCEPTIONS ARE OFTEN SPREAD ABOUT PUBLIC DEBT.
The first is that it constitutes a burden that one generation transfers to the next. However, as early as the 18th century, Jean-François Melon demonstrated the approximative nature of such a claim. Melon, the secretary of the famous John Law at the time when the latter was propounding his public debt monetisation policy, sought to justify himself after the policy’s failure. He gave his view on what happened in his Essai politique sur le commerce (Political essay on trade) where he declared:
“THROUGH PUBLIC DEBT, THE COUNTRY IS LENDING TO ITSELF.”
He insists on the fact that public debt does not effect a transfer from one generation to another but rather from one social group, taxpayers, to another, the holders of public securities, who receive the interest.
The second misconception is that the repayment of debt presents a threat to public finances. Some therefore suggest issuing perpetual debt, so that it will never have to be repaid. However, it just so happens that, in practice, public debt is already perpetual. Indeed, governments do little more than pay interest. Since the beginning of the 19th century, no entry has been made in a government’s budget for the repayment of its debt. Each time a loan comes to maturity, it is immediately replaced.
The third misconception about public debt is that a precipitous rise in interest rates would constitute a threat; after all, the government’s concrete and formal commitment is to pay interest. The increasing scarcity of potential lenders would generate this rise in rates and would restrict the opportunities for governments to borrow. However, every modern economy has a central bank acting as lender of last resort. As a result, banks have no problem buying debt that they can subsequently dispose of by selling it back to central banks – and they do so without limit. The effective interest rate and the amount of debt held by private players ultimately depend on the action of the central bank. Incidentally, the mission of the US central bank, the Federal Reserve, explicitly states:
“Maintain long run growth of the monetary and credit aggregates commensurate with the economy’s long run potential to increase production, so as to promote effectively the goals of maximum employment, stable prices, and moderate long-term interest rates.”
Though independent, central banks now maintain very low rates with the clear aim of alleviating the cost of interest for governments. In addition, as the central bank transfers back to the government the debt interest that the latter pays to the former, the portion of public debt owned by the central bank is free, which systematically reduces the average interest rate paid by the government. The situation in Japan presents an illustrative example of this. According to the OECD, its public debt/GDP ratio stood at 226% in 2019. The Japanese government quite calmly considers that this ratio will reach 600% in 2060. Its insouciance can be attributed to the fact that its net interest costs amounted to almost zero in 2019, thanks to an ultra-accommodating monetary policy and half of public debt being owned by the country’s central bank.
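The mechanics behind this can be sketched with a simple calculation. This is illustrative only, with hypothetical figures: the coupon rate and the 50% central-bank share stand in for the Japanese case described above, where interest paid on debt held by the central bank is remitted back to the treasury.

```python
# Illustrative sketch (hypothetical figures): the average interest rate a
# government effectively pays falls in proportion to the share of its debt
# held by the central bank, since that interest is remitted back.
def effective_rate(coupon_rate, cb_share):
    """Average rate net of interest remitted by the central bank."""
    return coupon_rate * (1 - cb_share)

# A Japan-like case: a 0.2% average coupon, half the debt at the central bank
rate = effective_rate(0.002, 0.5)
print(f"Effective average rate paid: {rate:.3%}")
```

With half the stock held by the central bank, the government's net interest burden is mechanically halved, which is why a 226% debt-to-GDP ratio can coexist with near-zero net interest costs.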
Finally, the fourth misconception is that there would be a division between good debt and bad debt.
Good public debt would finance investment; bad public debt would finance operations. This division makes little sense: it is based on taking the thinking behind private debt and applying it to public debt. It assumes that public investment spending prepares for the future, whilst public operational spending sacrifices the future for the present. However, it is easy to see that the salary of a researcher, whose work will lead to technical progress and therefore more growth, is operational spending, whilst the construction of a road leading nowhere corresponds to investment spending…
Nevertheless, the idea of good and bad debt should be detailed further because, in certain conditions, it should guide fiscal policy. Incidentally, our ancestors had identified the problem.
For a long time, religious authorities considered that remunerating a loan was tantamount to usury.
Their reasoning became more refined over time, to the extent that in the 13th century, Saint Thomas Aquinas could write:
“He who lends money transfers the ownership of the money to the borrower. Hence the borrower holds the money at his own risk and is bound to pay it all back: wherefore the lender must not exact more. On the other hand he that entrusts his money to a merchant or craftsman so as to form a kind of society, does not transfer the ownership of his money to them, for it remains his, so that at his risk the merchant speculates with it, or the craftsman uses it for his craft, and consequently he may lawfully demand as something belonging to him, part of the profits derived from his money.”
The nascent political economy then distinguished between two types of loan: on the one hand, there were “commercial” loans, also known as “production loans”, which financed investments and the emergence of future wealth, creating something on which to pay interest; on the other hand, there were loans aimed at helping those in difficulty, called “consumer loans”, which follow the same line of thinking as donations and should therefore be free.
The modern materialisation of Saint Thomas Aquinas’s reflections leads to the following affirmation: private debt is justified when financing investment that brings a structural improvement to growth, whilst public debt is justified in response to cyclical hazards, ensuring collective solidarity with economic sectors in difficulty due to cyclical fluctuations.
European treaties are based on these principles, the Treaty on Stability, Coordination and Governance in particular.
THIS TREATY STIPULATES:
The budgetary position of the general government of a Contracting Party shall be balanced or in surplus; [this] rule shall be deemed to be respected if the annual structural balance of the general government [falls within] a lower limit of a structural deficit of 0,5 % of the gross domestic product at market prices.
It confirms the distinction between a “good deficit” – the circumstantial deficit, which appears when growth is struggling and disappears when growth is sustained – and a “bad deficit” – the structural deficit, which is independent of the cycle and remains no matter the circumstances.
What is worrying today is that we are moving away from this scheme, which is not without negative consequences. The first of these consequences relates to equality between supply and demand. Any public expenditure that is not financed by a tax on private spending increases demand. If this increase lasts, it will lead to one of two situations: an external contribution, that is, a deepening trade deficit, or the opportunity for the production system to increase its prices, that is, a boost to inflation.
The second negative consequence relates to an increase in public debt generating negative expectations for private players.
First, the instinct to save in order to prepare for an uncertain financial future brought about by the accumulation of debt leads to an increase in asset prices – property bubbles might be the most obvious materialisation of this phenomenon. This is what economists call “Ricardian equivalence”.
Second, these negative expectations erode the credibility of the currency.
Countries (like Lebanon) that see their currencies disappear in favour of the dollar because of a surge in public debt are rare. Nevertheless, we are witnessing a resurgence of gold, which remains the ultimate monetary recourse in the collective unconscious, a resurgence underlined by the soaring price of this precious metal.
All this to say that it is time to put an end to the “no matter the cost”, even if a government default is not on the agenda.
Let’s remember the time before the pandemic. Prices are reasonable. From the beginning of 2010 to the beginning of 2020, the average annual increase in consumer price indices, when we exclude particularly volatile items like energy and food products, reaches 1.8% in the United States and 1.1% in the eurozone. The 2% objective set by central banks is not met and even the very low rate of unemployment (at the beginning of last year, it was 3.5% in the US and 5% in Germany) seems unable to generate an acceleration, via more dynamic labour costs.
Labour market developments – deregulation and a decrease in the bargaining power of employees – may explain the majority of this result. A collective preference for saving over investment and the credibility of monetary policies are other explanations that can be put forward.
But it’s only after a COVID-19 crisis that has lasted almost a year and a half and a way out that is finally taking shape, at least in the US and Europe, that the price landscape seems to have been thrown upside down! In two months (April and May), this very same core of prices increases by 1.6% in the United States (a 10% annual rate!) and 0.7% in the eurozone (an annual rate of over 4%). Just what is going on? This price acceleration comes as somewhat of a (bad) surprise, particularly because the objective of economic policy, throughout the pandemic, has been to maintain productive capacities (companies and employees), so that activity can restart ‘like before’ when the public health conditions allow it.
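The annualised rates quoted above follow from simple compounding of the two-month increases over six two-month periods; the arithmetic can be sketched as:

```python
# Annualising a two-month price increase by compounding it over the six
# two-month periods in a year (the arithmetic implied by the rates above).
def annualise(two_month_increase):
    return (1 + two_month_increase) ** 6 - 1

us = annualise(0.016)        # roughly 0.10 -> the "10% annual rate"
eurozone = annualise(0.007)  # roughly 0.043 -> the "annual rate of over 4%"
print(f"US: {us:.1%}, eurozone: {eurozone:.1%}")
```

Compounding rather than simply multiplying by six makes little difference at these magnitudes, but it is the convention behind annualised inflation figures.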
So, in terms of prices, things may not be happening exactly as expected. What explanations can we give? Let’s start with three.
First, the reopening of an economy more or less “preserved” over a fairly long period requires rebalancing. Starting production again is not instantaneous, and demand during lockdown is not the same as demand during unlockdown. For supply, a raw materials index, like the S&P GSCI, increases by 65% over one year (and even 130% compared with the low point in April 2020). Similarly, the cost of sea freight increases over one year by more than 150%. As for demand, during this interim period between one economic state and another, two mechanisms of upward price distortion coexist. The goods or services that turned out to be the winners of the lockdown have still not relinquished their crowns; their prices remain dynamic. Those that were the losers can now “pick themselves back up”, or rather pick their prices back up! The two graphs below illustrate what is happening in the US.
Based on this twofold observation and at this stage of analysis, an initial conclusion emerges: the price acceleration phenomenon may very well prove temporary, as the central bankers keep telling us. The production circuit will get back up to “cruising speed”, and the concomitance of these two movements in the rise of certain retail prices is not expected to last.
US: price winners from unlockdown (4% of index)
US: price winners from lockdown (12% of index)
We must remember the mechanisms that are at the heart of forming consumer prices. There are three key points in the matter.
1. Transmission losses between the raw product prices and consumer prices are very significant, so much so that in the American case the correlation between the two series is only 10%.
2. The profile of labour costs, and especially unit labour costs (labour costs minus the growth in labour productivity), shapes, with a delay of a few quarters, the profile of consumer prices. The messages sent by the front end of this relationship are not worrying. Unemployment is still far from its pre-COVID-19 level and businesses are putting a lot of emphasis on the need to improve their efficiency.
3. Inflation expectations play a significant role in the formation of prices. Indeed, the stability of expectations is the guarantor of the stability of prices. The reasoning behind this is as follows: if all consumers start to believe that prices will accelerate, they will together precipitate purchasing decisions. The imbalance, which is most often inevitable, between a sudden increase in demand and an offer that struggles to adapt quickly leads to the phenomenon of price acceleration. This phenomenon will escalate and become permanent if labour costs follow prices. It would then be justified to talk about inflation. Let’s say that, for the time being at least, expectations have done quite well in resisting the “fuss” generated by these somewhat sharp increases in consumer prices.
China: transmission losses between production price index (PPI) and core consumer price index (CPI)
US: key role of unit labour costs in the formation of retail prices
To conclude on this second analytical point, the risk of “cyclical” inflation seems rather limited at the moment.
Finally, despite the explicit wish and will to return to normal once the pandemic is behind us, shouldn’t we question the changes that it has brought about? Let’s ask three questions:
1. How can we eliminate the divergences generated by the health crisis (countries, sectors, companies and households, employment and savings)?
2. What will be the effect of the rise in debt (public and private)?
3. How can we normalise an economic policy that is so highly accommodating?
It is precisely because these questions exist that the resolve behind current economic policy is both remaining and transforming. The best illustration of the approach can be found in the United States in the High Pressure Economy. Its ambition is threefold: to prevent a decline in potential growth, to reorientate the economy towards the future (digital, environment and education/training) and to galvanise both supply and demand. This requires an increase in public demand and an increase in transfers, with the idea that private spending will follow. At the same time, it is also necessary to ensure that sectoral and structural policies contribute to the corresponding supply-side changes, higher productivity gains and more jobs, all while avoiding excessive timing differences between the respective upward shifts in demand and supply. Otherwise, there would be a risk of creating less reasonable price conditions. Further, there is no point trying to hide it: there is an element of “creative destruction” in the approach taken.
THREE DEVELOPMENTS ARE STARTING TO APPEAR.
1. The questioning of the triptych – movements (goods and people) / concentration (locations of production and possibly companies) / hyperconsumption – because of the constraints of sustainable development
2. The rebuilding of productive supply (air transport, tourism, automotive, etc.)
3. The matching of labour supply and demand with both labour shortages and excesses.
We have to admit that we are not facing a classic, cyclical sequence. Adjusting economic policy may not be appropriate (stimulus either poorly calibrated or ill-suited), and structural and sectoral changes may generate imbalances at the macroeconomic level; price acceleration would be an indicator of this. Of course, so far, this is all conjecture, but we have a duty to remain vigilant.
LET’S LOOK AT THE THREE CONCLUSIONS THAT WE HAVE REACHED:
The temporary is not made to last; the cyclical sequences are not sending any particularly worrying messages in terms of prices today or in the near future; and the mix, formed through economic policy initiatives and structural changes currently being set in motion, should be closely monitored because it could be a source of imbalances, including greater inflation. A certain historical reference may be worth considering: the years following the end of World War II. Indeed, this period had a need both to support the economy and to reabsorb the imbalance between an awakening civilian demand and a then very military supply. All of this forced structural and sectoral developments. But beware: even if there is a certain resonance in terms of the sequences, the issue of time is perceived differently. It was necessary to move very quickly 75 years ago, but many believe, rightly or wrongly, that time pressure is less intense today. As such, neither policy initiatives nor structural changes would be of such a magnitude and speed to generate serious imbalances, including the likes of more inflation.
Our partner Anthony Theau Laurent and director Edmond Richards were featured on the sixth edition of “The investment treaty arbitration review” by The Law Reviews, sharing their insights on Causation.
This edition of the Economic Brief will see us focus on economic growth. More specifically, we will examine the economic growth lost during the COVID-19 crisis and the time lag in catching up to where we should be, contrasting the situation in China with that in the United States, Europe and elsewhere.
Accuracy conducted financial buy-side due diligence and provided assistance with the SPA and completion accounts for Schneider Electric in the context of the acquisition of a controlling stake in ETAP Automation.
Accuracy provided financial due diligence support to House of HR on the acquisition of Cohedron, a leading group of full-service companies in the public sector. The takeover enables House of HR to strengthen their position on the Dutch market and in the public service sector.
In this edition of the Economic Brief, we will examine some of the factors behind the increase in corporate leverage. We look at how corporate debt can amplify the effects of a financial shock and go on to investigate what the price may be for the current high levels of debt.
Accuracy conducted buy-side financial due diligence for Amundi PEF and a consortium of funds in the context of their acquisition of a stake in The Reefer Group.
In this month’s Economic Brief, we delve into changes in mobility and economic confidence. We look at how different countries are opening up at different rates and analyse confidence in both the services sector and the manufacturing sector. Finally, we look into recent comments comparing the economic situation in the 2020s with the 1920s to discover if there is some truth to them.
Accuracy assisted NewPort Capital with its investment in Amslod, a fast-growing and leading direct-to-consumer Dutch e-bike brand. The investment of NewPort Capital will enable Amslod to accelerate its growth strategy and strengthen its position in the Dutch and European e-bike sector.
This month in the Economic Brief, we look a little more closely at the impact of the pandemic in the developed world. We examine the different rates of vaccination in the West and consider how economic confidence has been affected. We go on to consider the impact of the crisis on debt servicing for companies before touching briefly on inflation.
Open Banking is changing the face of the Canadian financial services industry
Consumers increasingly expect online banking services to rival the services they can access in branch. Open Banking or Consumer-Directed Finance – which officially recognizes consumers’ ownership over their financial data – has the potential to drive innovations in financial technology that will vastly improve the customer experience. Open Banking policy is driven by, and beneficial to, consumers, but it has the potential to seriously disrupt the financial services industry. As fintechs and tech giants begin entering the market and developing products and services based on Open Banking APIs, the Canadian banking oligopoly will begin to erode.
Absent immediate and decisive action by incumbents, there is a significant risk that traditional financial institutions may eventually be reduced to commoditized providers of financial services, competing with other incumbents solely on price in a race to the bottom. For example, Open Banking enabled applications could drastically reduce the need to shop around for mortgages and loans; consumers could simply share their credit information with a third-party app that provides them with a list of quotes from major financial institutions from which they could simply select the lowest rate. Despite posing a threat to incumbents, fintechs can also serve as allies, helping them better serve their customers while defending against the more serious competitive threat posed by the tech giants.
Tech giants are capitalizing on the shift to consumer-directed finance to penetrate the financial services market
Worryingly, most Canadian financial institutions lack the infrastructure and/or the expertise needed to compete effectively with tech giants like Amazon and Facebook, which have both the technological and financial resources to displace incumbents. Google has recently entered the consumer banking space by allowing consumers to open checking accounts and transfer money digitally through the Google Pay app, and Apple partnered with Goldman Sachs in 2019 to launch the Apple Card. In China, an early adopter of Open Banking, tech giants like Tencent and Alibaba have already begun to dominate the financial services market with integrated platforms such as WeChat Pay and AliPay.
Chinese Mobile Payments Market Share by Transaction Value
Source: Bloomberg, iResearch data as of June 30, 2020
In addition to looming competition from Western tech giants, fintechs also threaten to disrupt the industry by chipping away at the consumer-facing link in the financial services value chain by providing innovative services built on Open Banking APIs (including account aggregation, robo-advisory, automated accounting etc.). This shift is particularly problematic in an era of near-zero interest rates putting pressure on lenders’ bottom lines. While fintechs may appear to threaten incumbents, they can also serve as perfect partners for financial institutions looking to remain competitive in the face of potential competition from tech giants. By providing fintechs with access to an API ecosystem and creating mutually beneficial partnerships, financial institutions can leverage their expertise and agility to bring innovative products and services to market faster. Speed to market is key because Open Banking adoption, as well as the competitive pressures that come with it, is intensifying and incumbents should act quickly.
Growth of Financial Services APIs and Fintech Deal Volume
The Canadian Open Banking ecosystem is still in its infancy, incumbents should learn from experiences abroad to help prioritize the most promising use cases
The secular digitization trend in the financial services industry was accelerated by the COVID-19 pandemic, as well as the ensuing lockdowns, which forced consumers to rely increasingly on digital banking as bank branches around the world were shuttered. According to a recent report by TrueLayer, a UK-based Open Banking data aggregator, use of their Payments API grew 832% between March and July 2020. Further, the average transaction value of those payments more than doubled since last year, while usage rates have yet to fall off. The same report analyzed millions of API calls and found that, as of 2020, PFM (personal finance management) was by far the most popular Open Banking use case, representing nearly a quarter of all API calls. PFM includes applications such as account aggregation, smart budgeting and auto-saving. PFM applications are likely the most popular use case because they’re the easiest to implement, but there are a number of emerging use cases in proptech, insurtech and regtech that promise to further revolutionize consumer banking.
Open Banking API Calls by Use Case – Europe (% of API Calls)
Source: TrueLayer
Open Banking not only makes retail banking more convenient for consumers, it also provides SMEs with a previously inaccessible suite of time-saving digital tools that help them stay competitive. The TrueLayer report shows that nearly 10% of all API calls related to automated accounting applications. By automating accounting and other back-office tasks, entrepreneurs can spend more time on their businesses and less money on professional services.
Canadian incumbents have begun preparing for the shift towards consumer-directed finance by forging partnerships with leading fintechs
A small group of large financial institutions, commonly referred to as the Big 6, has long dominated the Canadian financial services value chain. Other industry players including insurance companies, wealth management firms, credit unions and payment processors have successfully competed with the Big 6 in certain verticals, but none have materially disrupted the industry across the value chain. While the oligopolistic nature of the industry has helped Canadian banks outperform their European and American peers, consumers have gotten the short end of the stick. Open Banking promises to change this by opening up bank data and processes to third parties who can leverage them to create innovative products and services. Most incumbents lack the expertise, technological infrastructure and operational agility to build competitive products and services internally. As such, most incumbents are forced to choose between acquisitions and partnerships when implementing Open Banking use cases, with the latter being the preferred choice for most. Canadian industry leaders have already established partnerships with a number of promising fintechs across the value chain in anticipation of the looming paradigm shift.
Canadian Fintech Partnerships
Source: Luge Capital, Company Websites
In 2019, Desjardins announced a partnership with Hardbacon, a personal financial management and account aggregation application, in order to drive traffic to Desjardins’ online brokerage service. In 2017, RBC collaborated with Wave to provide their SME clients with a suite of accounting, invoicing and financial management tools. These mutually beneficial partnerships help incumbents by allowing them to improve their service offering and stay competitive without substantial investment. While the existing incumbent-fintech partnerships are effective, they are far from sufficient. Customers’ needs and expectations are constantly evolving, and incumbents must continuously innovate to ensure they can keep up with the rapid pace of change. If they don’t, the tech giants will.
Continuing the theme from the last edition of the Economic Brief leads us to consider how countries are dealing with the economic impact of COVID-19 and what actions they are taking to overcome the crisis. As can be predicted, the traditional rich–poor dichotomy is alive and well.
That is the conclusion of a French study led by the Ministry of Labour during the first lockdown, an unprecedented period during which the use of remote working became the norm overnight for many sectors of activity.

One year on from the birth of this revolution, and thanks to the feedback of over 450 colleagues, I would like to share with you some invaluable lessons learned.
A setting conducive to concentration

Despite a certain feeling of distrust and thanks to our extraordinary capacity to adapt, remote working has proved itself. It facilitates, when the home allows it, a setting that favours the concentration necessary to perform certain tasks, like drafting reports, for example. It also proves to be efficient in the following concrete cases: short interactions with colleagues; presentations of simple documents; and meetings with a limited number of participants, well-prepared content and a predictable flow.
An obstacle to learning and creativity

Remote working imposes a certain distance, however, no matter what technological tools are chosen or how frequently they are used. This distance slows down the smooth running of a quality learning process. Indeed, such a process can only take place in direct contact with the realities of the job. The apprentice has to be able to observe, question and understand best practices to be able to get to grips with them. Remote working also diminishes creativity by depriving us of the precious interactions that take place outside of the nitty-gritty of the job. It is these interactions that make up the life of an office, a team, a company. An unexpected comment here, a nod or shake of the head there, an encouraging look… so many exchanges that make it possible to call something into question or to be audacious and which allow us to innovate together.
Group erosion
Ultimately, if a
company is understood as simply the sum of its parts – or rather the sum of
isolated individuals – it is meaningless. Remote working deprives us of this
key aspect of the group, of this shared project nourished every day by our
interactions, our agreements and disagreements: conviviality, giving us a sense
of belonging and being useful.
The food for thought on this topic is vast and, as the health crisis continues to affect us and obliges us to adapt once more, I invite you to share with us your feelings on this new way of working.
On 21 January 2021, whilst visiting the Centre for Nanoscience and Nanotechnologies (C2N-CNRS) on the Plateau de Saclay (the European Silicon Valley), President Emmanuel Macron unveiled his ambitious Quantum Plan. The aim of this plan, which relies on France’s excellent research credentials, is to close the country’s gap in terms of investment.
It must therefore
promote work and research on computers, sensors, calculators and even
cryptography. In total, almost 1.8 billion euros will be dedicated to this
five-year plan.
The plan ‘is a plan for the whole ecosystem’, the French president also announced, a sign that these technologies will reach the market in particular through certain start-ups in the quantum technology sphere.
One of the most
promising, Quandela, is one of the first companies in the world to
commercialise photonic qubit emitters in the form of single photons. This first
technological building block is essential for the creation of future quantum
calculators.
Created in 2017
by Pascale Senellart (CNRS research director), Valérian Giesz and Niccolo
Somaschi, Quandela is a spin-off of the C2N-CNRS. The team’s objective, based
on this light pulse technology, is to improve the calculating speed of research
computers and ultimately to build the first quantum computers.
The possibilities
offered by such a development are immeasurable, from the potential discovery of
new medication thanks to simulations of molecular interactions to applications
in aeronautics or banking by enabling virtually infinite data and risk
analysis.
Quandela is at
the heart of the quantum revolution and is approaching the next step in its
growth thanks to a fundraising round completed in July 2020 with Quantonation (the first
venture capital fund dedicated to quantum technologies and innovative physics)
and Bpifrance (via the French Tech Seed fund). This fundraising will in
particular make it possible to accelerate the commercial deployment of the next
generation of products.
Quandela has been supported for some months by La Place Stratégique – an organisation sponsored by the French state (Ministère des Armées, Direction générale de l’armement, Agence de l’innovation de défense, Gendarmerie Nationale), large corporates (Thales, Arquus) and the firms Accuracy and Jeantet – avocats – whose role is to assist the young companies that will count in tomorrow’s world.
Customisation and personalisation in the beauty sector
The personalisation of beauty is much more than a simple marketing innovation
Marketing and innovation have always been key success factors in beauty
and personal care companies. This is even more so the case today in an
environment where the consumer has access to a much broader offer and greater information
thanks to the internet.
Historically, marketing and innovation cycles were mostly product-centric,
focusing on the continuous improvement and upgrading of product ranges and
brands. However, this marketing routine has been abruptly disrupted by new
and growing consumer expectations. Indeed, marketing and innovation have now become
customer-centric to feed the need for natural products on the one hand and more
personalised products on the other.
We know that growing concerns for the environment and organic products
are structural.
But when it comes to the customisation and personalisation (hereafter C&P1) of beauty products, to what extent should we consider this as a major structural trend or just a marketing gimmick to please millennial consumers?
We firmly believe that the customisation and personalisation trend will significantly
reshape the beauty industry as it directly drives brand differentiation and
business economics.
Below we will detail how and why.
The C&P trend is driven by customer expectations and enabled by technological innovations
Graph 1. Drivers and enablers of the C&P trend
Three drivers generated by customer expectations
Need for customer-centric products: the growing appeal of customised and personalised beauty products reflects a change in the expectations of consumers, notably in mature markets saturated by a standardised offer and overconsumption.
Ethical considerations: C&P enables consumers to select the ingredients used in the products (trend to offer sustainable, vegan, cruelty-free or organic products).
Need for inclusion and diversification: customised beauty makes it possible to fulfil customer needs that are not addressed by mass-market products (e.g. Afro-Caribbean haircare, women with darker skin tones).
Two technological enablers
Digitalisation: the growing convergence of the online and offline worlds and the rise of BtoC channels are paving the way for the development of customised beauty.
Scientific advancements and the rise of new industrial technologies: the combination of scientific and technological advancements offers a unique opportunity to obtain consumer data, analyse it and understand consumer needs in order to create fully tailored beauty solutions. The strategic value of consumer data is greater than ever for beauty and personal care companies.
The combination
of these two enablers materialises through five main solutions or ways of
operating that companies have implemented in their C&P strategies.
1. High-tech beauty
In the wake of personalisation through
algorithms, several major players are developing high-tech beauty products
providing customers with a complete personalisation experience. These companies
use artificial intelligence, augmented reality or even 3D printing to be at the
forefront of beauty technology.
To illustrate, L’Oréal presented a new device at the 2020 Consumer Electronics Show called ‘Perso’, which is expected to be launched in 2021. This device creates high-end personalised skincare, lipstick and foundation products. The product operates in four steps: (i) a personal skin assessment is conducted thanks to the ModiFace technology (artificial intelligence); (ii) the user’s local environmental conditions are then assessed by the device thanks to geo-location data; (iii) the user is able to customise the product formula for specific wants or needs; and (iv) finally, the device produces the cosmetic product, taking into account all of the required parameters.
2. Personalisation through algorithms
An increasing number of beauty and personal care
players offer personalised cosmetics created by algorithms. Customers usually
answer a questionnaire or undertake an assessment to ascertain their needs,
whether it be online or in store. Answers and/or results are then analysed by
algorithms to determine the product formula that best matches their individual
characteristics.
For example, the French brand IOMA offers
personalised skincare cosmetics based on an online questionnaire or an in-store
skin assessment. An algorithm will automatically recommend the ideal formula
from more than 33,000 possible combinations. Information on consumers, such as
skin assessments, enriches IOMA’s skin ‘Atlas’, a database which summarises,
compares and samples skin data to develop new skincare solutions.
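As a purely hypothetical sketch (not IOMA’s actual algorithm), this kind of questionnaire-driven matching can be thought of as scoring each candidate formula against the customer’s declared profile and recommending the closest one. The attribute names and the small catalogue below are invented for illustration only.

```python
# Hypothetical sketch of questionnaire-based product matching: score each
# candidate formula against the customer's profile and recommend the closest.
# Attributes and catalogue are invented; this is not any brand's real algorithm.

def recommend(profile, catalogue):
    """Return the formula whose attributes are closest to the profile."""
    def distance(formula):
        # Squared-distance over the attributes the customer declared.
        return sum((profile[k] - formula[k]) ** 2 for k in profile)
    return min(catalogue, key=distance)

# Questionnaire answers encoded on a 0-10 scale.
customer = {"hydration": 3, "sensitivity": 8, "oiliness": 2}

formulas = [
    {"name": "F-101", "hydration": 2, "sensitivity": 7, "oiliness": 3},
    {"name": "F-205", "hydration": 8, "sensitivity": 2, "oiliness": 6},
    {"name": "F-330", "hydration": 5, "sensitivity": 9, "oiliness": 1},
]

best = recommend(customer, formulas)
print(best["name"])
```

A real system would of course work over tens of thousands of combinations and richer data (in-store skin measurements, purchase history), but the principle — map answers to a profile, then match against a formula database — is the same.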
3. Face-to-face consultations
In order to find the most appropriate cosmetics
for each individual, some brands have put in place face-to-face meetings with
experts to help customers create personalised products tailored to their
specific needs.
As part of its Technology Incubator, L’Oréal
launched Color&Co, a direct-to-consumer brand specialised in personalised
hair-colouring kits, in 2019. Its value proposition lies in a ten-minute free
video chat with a specialised colourist, who creates a personalised kit adapted
to the customer’s wants and hair specificities, which have been described
previously in a short questionnaire. The product is then directly shipped to
the customer’s door and contains everything needed for the customer to dye his
or her hair at home. Face-to-face consultations therefore provide consumers
with personalised cosmetics that aim to answer the growing demand for inclusion
and diversification.
4. Mix & Match products
Several brands are currently offering ‘Mix &
Match’ products, which allow customers to make a choice between all available
components and to build customised products matching their own expectations.
For example, Guerlain launched ‘Rouge G’ in 2018. It
is a customisable lipstick offering customers the possibility to choose their
lipstick colour from 30 available shades and to select their favourite lipstick
case from 15 different proposals. Therefore, Mix & Match solutions enable
consumers to express their own individuality and can be used as a means to
better retain customers through a co-creation process.
5. Chatbots
Chatbots have been increasingly used on company
websites and on social media in order to provide customers with a more tailored
approach to service. Indeed, chatbots usually direct a consumer towards an item
that he or she might enjoy. They occasionally work together with augmented
reality technology, which enables customers to try beauty products virtually before
buying them.
By way of illustration, French makeup retailer Sephora launched a smart beauty bot, Sephora Virtual Artist, allowing customers to try on a wide range of makeup products instantly (lipsticks, eyeshadows, eyeliners, etc.) by uploading a selfie into the corresponding app. Having benefitted from a customised user experience, customers can then purchase their favourite products directly on Sephora’s mobile website.
These five solutions differ in terms of the initial investment required, the complexity of their implementation and the degree of personalisation (see graph below).
Graph 2. C&P solutions in the beauty and personal care market
Successful C&P operations should lead to more profitable business economics
Beauty companies expect C&P to generate a large positive economic
contribution, which should improve their profitability significantly and
structurally.
Capture retailer margins via disintermediation
The personalisation business model is based on building a direct
relationship with the consumer. This is revolutionary for beauty companies as
personalisation tools and platforms enable them to circumvent traditional
retailers and capture their distribution margins. Indeed, the margin captured from retailers more than offsets the additional distribution costs incurred.
Charge a price premium
C&P mechanisms also provide significant potential for price premiums: consumers perceive the value of customised and personalised products to be higher. The analysis of several product samples representing various C&P solutions reveals that the applicable price premium increases with the degree of personalisation offered. On average, the price premium charged for these products is found to be close to +50% relative to the reference product (see graph 3).
Graph 3. Premium charged analysis on degree of personalisation
These price
premiums further take into account business model and cost structure
adaptations required to shift from a mass-market to an individual on-demand
business model. To fully capture the underlying value of the C&P trend,
beauty companies would have to invest in solutions up front and may also incur
higher production and distribution costs.
Increase consumer base, enhance loyalty and increase purchase order frequency
The shift from product centricity to customer centricity and thus
tailored solutions for customers is based on the increased quantity and
spectrum of data provided by final customers. The data collected goes beyond
the traditional direct contact details (email, phone number, home address,
birthday, etc.) as customers are required to input their individual
specifications, such as skin tone, product preferences (colours, shades, etc.),
product expectations, appetite for natural products and more. Providing truly
individualised solutions to customers has a positive impact on customer
acquisition and loyalty and further raises the barriers to switching to other brands.
Further, the availability, subsequent analysis and use of this precise
consumer data provides an opportunity for beauty companies to develop and
implement their own BtoC business models. This not only makes it possible to
bypass traditional retailers, but also makes it possible to implement personalisation-based
subscription models. Such models are already being implemented in the beauty
space with, for example, ‘The Dollar Shave Club’ and even in other FMCG sectors
with, for example, Nestlé’s ‘Tails.com’, a personalised pet food subscription
concept. These models enable companies to increase consumer purchase order
frequency by automating the ordering process.
A successful C&P strategy can double the lifetime value (LTV2) of a client
There is a lot of value to be created by
addressing the C&P trend driven by the points mentioned above, that is, capturing
retailer margins via disintermediation, benefitting from price premiums (see graph
3) and enhancing consumer loyalty and therefore increasing purchase order
frequency (see graph 4).
Whilst price premiums may seem to be the most evident source of value, we found that consumer acquisition and loyalty, as well as disintermediation, are the key drivers of the lifetime value linked to business models focused on C&P.
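To make these drivers concrete, the sketch below compares the lifetime value of a mass-market customer with that of a C&P customer, computed as the present value of projected cash flows. All parameters (basket size, margin rates, order frequency, retention, discount rate) are hypothetical assumptions chosen for illustration, not figures from the analysis above.

```python
# Illustrative LTV comparison: mass-market vs C&P business model.
# All parameters below are hypothetical assumptions for illustration only.

def lifetime_value(basket, margin_rate, orders_per_year, retention,
                   discount_rate, years=10):
    """Present value of projected cash flows from one customer relationship."""
    ltv = 0.0
    survival = 1.0  # probability the customer is still active
    for year in range(1, years + 1):
        cash_flow = basket * margin_rate * orders_per_year * survival
        ltv += cash_flow / (1 + discount_rate) ** year
        survival *= retention  # a share of customers churns each year
    return ltv

# Baseline: standard product sold through a retailer.
base = lifetime_value(basket=40, margin_rate=0.30, orders_per_year=2,
                      retention=0.60, discount_rate=0.08)

# C&P: price premium, part of the retailer margin captured via direct sales,
# higher loyalty and order frequency (e.g. a subscription model).
cp = lifetime_value(basket=60, margin_rate=0.38, orders_per_year=2.2,
                    retention=0.65, discount_rate=0.08)

print(f"Mass-market LTV: {base:.0f}, C&P LTV: {cp:.0f}, ratio: {cp / base:.1f}x")
```

Under these invented parameters, the combined effect of the price premium, the captured margin and the higher retention and order frequency multiplies LTV by a little over two — the order of magnitude suggested above. Notably, most of the uplift comes from retention and frequency rather than from the premium alone.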
Additionally, beauty companies will be required to make initial investments and organisational efforts to foster innovation, build industrial capacity, and develop and maintain digital BtoC platforms. These investments may seem expensive from a business perspective at one point in time, but the opportunity cost of doing nothing may prove to be more expensive: beauty companies may lose relevance in the eyes of the customer and subsequently lose sales and market share.
Graph 4. Impact of the personalisation and e-commerce on LTV
Ultimately, the C&P trend is not a marketing gimmick but a major economic repositioning of the industry. By transforming their business models, beauty companies can leverage this trend and create significant lifetime value.
____________
1 Whilst customisation refers to specific changes performed by an end-user to adapt a product to his or her specific needs, personalisation is done by the system itself, which identifies customers and provides them with content matching their own characteristics.
2 Lifetime value (LTV) corresponds to the monetary value of a customer relationship, based on the present value of the projected future cash flows from the customer relationship.
Sophie Chassat Philosopher, partner at Wemean
The crisis has forced us to stop looking at things and finally see them. Let’s share a few words on this distinction, which comes from the philosopher Bergson. Most of the time, we put labels on situations, enabling us to quickly identify them and move on to action. To paraphrase Bergson: when we look at an object, usually, we don’t see it; what we see are the conventional signs that enable us to recognise the object and distinguish it practically from another, for convenience.1 However, as Bergson would go on to say, it is only when we pay attention to the uniqueness of things that we can really see them – and therefore measure their singularity in order to provide an adequate response, to adapt and to truly innovate.
By plunging us into an unprecedented situation, the crisis has shattered our preconceived filters. At first blinded, our eyes have gradually been opened. We have seen the dysfunctions that we previously considered normal. Remote working has become some sort of optical apparatus, a veritable telescope helping us to put many things into perspective: by seeing ‘at a distance’ (the literal meaning of the prefix tele) the way we work, we can measure, for example, the importance of direct human contact, as suggested by Frédéric Duponchel in his editorial.
Above all, we have started to explore our blind spots and hidden regions – these zones that can be identified by the ‘Johari window’2 , a matrix that reminds us of our individual perspectives and biases. Each individual, just like each organisation, has his or her ‘arena’ (known to self and known to others), ‘façade’ (known to self but unknown to others), ‘blind spot’ (unknown to self but known to others) and ‘unknown’ (unknown to self and unknown to others) – it is the exploration of this last zone that the crisis has made possible, or rather necessary. We should note that to realise this exploration, numerous organisations lean towards the clarification of their ‘vision’: the fact that topics like the ‘raison d’être’ and the ‘mission’ remain high on the company agenda shows the fundamental need to adopt new ways of seeing one’s business.
To train for this new way of seeing, reading a recently published work of art history alone qualifies as an ocular workout: in Le Strabisme du tableau. Essai sur les regards divergents du tableau3 , Nathalie Delbard invites us to take a fresh look at classical portraits and discover that numerous subjects in the pieces have a slight squint, not because of problems of sight, the author explains from the outset, but because the painters thus encourage us, the viewers, to shift our gaze off-centre. Our points of reference are wavering, but new perspectives are opening up. As Apollinaire put it, ‘Victory above all will be / To see well in the distance / To see everything / From close / And let everything have a new name’. 4
Sophie Chassat is a philosopher, a partner at the advisory firm WEMEAN and a corporate director. She works on strategic issues linked to the contribution of business projects: defining them, activating them operationally and determining their impact on governance.
____________
1 Bergson, Madrid conferences on the human soul (1916), in Mélanges.
2 The Johari window was conceptualised by Joseph Luft and Harrington Ingham in 1955 to represent (and improve!) communication between two entities.
3 From L’incidence Editeur, 2020. The title can be roughly translated as ‘The squint in works of art. An essay on divergent gazes in works of art’.
4 “La Victoire”, in Calligrammes (1918). The original French: ‘La Victoire avant tout sera / De bien voir au loin / De tout voir / De près / Et que tout ait un nom nouveau’
Consequences of the development of green finance for companies
Franck Bancel Academic adviser, Accuracy
Since the Paris Agreement was signed in 2015, the fight against global warming has established itself at the top of the agenda for many companies. The reduction of greenhouse gas emissions has become a priority, requiring the implementation of new management systems. In this context, so-called green finance, which enables environmentally friendly project financing, is gaining traction. The development of green finance has major consequences for companies and raises multiple questions: how can we define the concept of green finance? What does it mean for companies? What is the role of the financial sector? What financial instruments have been specifically developed to meet company needs?
What is green finance?
Green
finance groups all financial activities that contribute to the fight against
global warming. For this reason, it is also called ‘climate finance’ or ‘carbon
finance’. It is not ‘sustainable finance’. Sustainable finance, which has a
broader definition, prioritises responsible investment (RI) and adds
environmental, social and governance (ESG) criteria to purely financial
criteria.
Green
finance calls into question one of the major principles followed by financial
analysts. Traditionally, finance has no other objective than to facilitate the
allocation of resources to the most profitable projects, without consideration
of their impacts on the environment. By contrast, for green finance, only the
projects that favour the transition away from fossil fuels should be considered.
This does not mean that the notion of profitability ceases to exist; nothing
prevents companies from choosing the most profitable projects from amongst the
green projects available; what changes is the order of priority. The search for
profitability is now subordinate to the green nature of the investment.
What is climate risk for companies?
As Mark Carney explained in his famous speech
from 2015 on ‘Breaking the Tragedy of the Horizon’, climate risk can be broken
down into three distinct risks. First, the occurrence of extreme climate- and
weather-related events (hurricanes, droughts, etc.) may generate a physical
risk that materialises through the destruction of certain assets and a loss of
activity for companies. Second, transition risk is linked to regulatory changes
decided upon by public authorities, which may lead certain companies to call
into question their economic model or even to disappear altogether. Let us take
the example of the automotive sector: because of regulatory changes, the
manufacture of combustion engines (petrol or diesel) will diminish drastically
in the decade to come, even though these engines utterly dominated just a few
years ago. Finally, liability risk related to non-compliance with environmental
legislation may give rise to significant damages and interest. We can
imagine that in the more or less distant future, companies may be pursued
legally for endangerment of others, as were, for example, tobacco companies.
At first glance, one might consider that the
majority of these risks will not materialise in the short term and that
companies have the time to adapt. We think, on the contrary, that companies
must anticipate these risks and quickly implement the appropriate management
processes to deal with them. Certain sectors must adapt now: their longevity is
threatened. Thus, in the oil and gas sector, certain major players have
started to invest massively in new sectors (batteries, electricity, etc.) and
to diversify their operations significantly. As for the less polluting sectors,
the need to reform may be less urgent, but the trend remains the same. Large
groups will gradually force their subcontractors to reduce their carbon
footprint and pressure will be high on SMEs. Access to financing in good
conditions will also require compliance with emission criteria (and more
generally with ESG criteria). Banks have made this clear, and their credit
distribution models are evolving accordingly. The image and value of a company’s
brand is now inherently linked to its ability to contribute to the fight
against global warming.
How can companies manage climate risk?
Climate risk is not the subject of a
centralised management process in the majority of companies. Today, two large
departments are involved in the management of climate issues: the sustainable
development department ensures the operational management of projects
compatible with the fight against global warming. This means enabling the
company to comply with its climate commitments by proposing operational
solutions to reduce its carbon footprint throughout the value chain. For
example, can the company replace one material with another whose production
emits fewer greenhouse gases, without altering the quality of the final
products? How can suppliers that are more virtuous in terms of emissions be
selected? The finance department centralises the information and produces
financial and extra-financial reporting in relation to environmental
performance. Climate reporting will become a central part of a company’s
financial communications in a context where financial information will become
standardised under pressure from the financial community and public
authorities. Investors request more and more information to evaluate not only
emissions but also all negative externalities. In the years to come, the
sustainable development and finance departments will have to cooperate further
and produce together new indicators that incorporate both financial and
environmental performance.
Moreover, companies in certain sectors (power plants, manufacturing
plants, etc.) are subject to emission ceilings. They are granted a certain
quantity of emission rights (quotas) but can purchase further rights on the
market if they find that they do not have enough (or indeed they can sell their
rights if they find themselves with an excess). The European Union has
committed to a policy to reduce the number of quotas available, which should
automatically generate an increase in their value over time and result in new
constraints for companies.
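The quota mechanism described above can be sketched in a few lines: a company covers its emissions with allocated quotas and buys (or sells) the difference at the market price. The volumes and the carbon price used below are hypothetical illustrations, not actual EU ETS figures.

```python
# Minimal cap-and-trade sketch: a company covers its emissions with allocated
# quotas and settles the difference at the market price.
# All figures are hypothetical illustrations, not EU ETS data.

def quota_settlement(emissions_t, allocated_t, carbon_price):
    """Net cost (positive) or revenue (negative) of balancing emissions.

    emissions_t  : tonnes of CO2 actually emitted
    allocated_t  : tonnes covered by allocated quotas
    carbon_price : market price per tonne of CO2
    """
    shortfall = emissions_t - allocated_t  # > 0: must buy; < 0: can sell
    return shortfall * carbon_price

# A plant emitting 120,000 t with 100,000 t of quotas at 50 per tonne
# must buy 20,000 t on the market.
cost = quota_settlement(120_000, 100_000, 50)

# If the reduction in available quotas pushes the price to 80 per tonne,
# the same shortfall costs 60% more: the constraint tightens over time.
cost_tighter = quota_settlement(120_000, 100_000, 80)

print(cost, cost_tighter)
```

The same function with emissions below the allocation returns a negative number, i.e. revenue from selling excess rights — which is precisely the incentive the mechanism is designed to create.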
What is the role of the financial sector?
For the financial sector, the aim is to redirect activity as a priority towards projects compatible with the fight against climate change. Most of the large players in finance, whether banks or investment funds, have made commitments to reduce the carbon footprint of their portfolios. As a result, some banks have stopped financing companies that operate in the coal sector. More generally, one may question the financing of ‘fossil fuel’ companies; the continuation of their activities would challenge the objectives set to limit global warming (certain writers talk about ‘stranded assets’ to discuss these fossil fuel assets). Banks are now obliged to undertake climate stress tests and measure the impact of climate risk on their solvency. Article 173 (paragraph VI) of the Loi sur la Transition Energétique pour la Croissance Verte (Law on Energy Transition for Green Growth) requires portfolio management companies to publish information on the consideration of their ESG policy and therefore on the consequences of their investments on the climate.
To help investors better grasp this new
environment, public authorities have implemented ecolabels in various countries
that require labelled funds to invest significantly in green assets. This is
the case in France with Greenfin, in Luxembourg with LuxFLAG Environment and
LuxFLAG Climate Finance, and in Nordic countries with Nordic Swan Ecolabel.
These labels are backed by a taxonomy that defines what a green economic
activity is. The taxonomies play a major role in this respect because they
guide investors in their investment decisions. The European Union has created a
draft taxonomy that distinguishes between carbon-neutral activities (low-carbon
transport, etc.), transitioning activities (building renovation, etc.) and transition-facilitating
activities (production of wind turbines, etc.).
What are green financial instruments?
In this context, new financial instruments have
been developed by markets and banks with the aim of promoting the transition to
greener energy sources. For example, green bonds have experienced spectacular
growth in the past few years. They are bonds from which the funds collected
must exclusively be used to finance or refinance, in whole or in part, green
projects. For a company, issuing green bonds generates significant additional
costs (administrative costs linked to the issue process, legal costs, audit
costs, reporting costs, higher mobilisation of staff, etc.) for a very limited
reduction in the cost of financing. According to financial literature, the
additional costs equate to seven basis points, whilst the premium is only two
basis points. However, issuing green bonds enables companies to increase their
investor base, secure the issue even in difficult market conditions and
generate organisational gains (better cooperation between the finance and
operational teams, increase in competence of the finance teams on subjects
linked to ecological impact, etc.). Green bonds are not the only green
financial instruments that have been developed. Banks, for example, have
started to securitise green assets (that is, issue securities on the market
whose value is based on the repayment of green loans). The development of this
market will depend, however, on regulators, which may reduce the capital costs
of the banks that finance this type of loan or even make the financing costs of
‘brown’ assets more expensive.
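The cost trade-off cited above — roughly seven basis points of additional cost against a premium of about two — can be made concrete with a small calculation. The issue size below is a hypothetical example, not a figure from the text.

```python
# Net cost of issuing a green bond vs a conventional bond, in basis points.
# The 7 bps extra cost and 2 bps premium come from the figures cited in the
# text; the 500m issue size is a hypothetical example.

BPS = 1e-4  # one basis point = 0.01%

def green_bond_net_cost(issue_size, extra_cost_bps=7, greenium_bps=2):
    """Annual net cost of going green: extra costs minus the yield saving."""
    return issue_size * (extra_cost_bps - greenium_bps) * BPS

net = green_bond_net_cost(500_000_000)  # hypothetical 500m issue
# 5 bps net on 500m is a modest annual cost relative to the investor-base
# and organisational benefits described above.
print(f"{net:,.0f}")
```

On this arithmetic, the direct financial case for green bonds is roughly neutral to slightly negative; the benefits the text lists (broader investor base, issue security, organisational gains) are what tip the balance.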
In conclusion, the fight against climate change has, in only a few years, created a new paradigm. For a company, considering that it need not pay any attention and can just continue business as usual seems like a risky choice. However, although the route has been mapped out, a great many questions essential to the deployment of green finance projects and tools remain unanswered. The transition to greener energies is particularly technical; the physical measures, just like the financial ones, are either subject to disagreement or considered insufficient. Convergence is expected to take place in the coming decade and will further accelerate the changes under way.
Gillian Tett, one of the chief editors at the Financial Times, commented earlier this year that people in New York found it harder to part with their Christmas trees after the holiday period. Has the COVID crisis really changed our relationship with time and space? Private and professional life is intertwining, just as the line between home and office is blurring. Are our points of reference changing? Will we find them again when the pandemic has finally been put behind us?
We should keep in mind this warning about possible behavioural changes taking place when we wonder what 2021 has in store for us. Of course, we should start this forward-looking exercise by taking a look at the macroeconomic forecasts. They bring hope. The IMF has revised its figures for global growth upwards: +0.3 points to 5.5%, after -3.5% in 2020. At the IMF, they seem to think that the loss of economic activity generated by the health crisis is going to be more than compensated for! Can we say then that everything is going back to normal, back to business as usual?
No. And we must consider other approaches to better understand the upcoming period.
Let’s stay on macroeconomic territory for a while and note a few points:
1. Recovery remains highly conditional upon developments on the health front. If the decline in the pandemic is delayed even by only a few months, the first half of the year will be lost to the recovery; performance for the whole year will clearly be affected. Taking the example of the eurozone, we can see that growth fell by over 7% in 2020. Under the commonly accepted idea that there will be a clear decline in the pandemic from spring, the economic rebound could reach between 4% and 4.5% this year. Delay the decline by just three months and a third of this growth would be cut!
2. The big growth figures that we’re talking about shouldn’t mask the point that it will take time to get back on the track that was expected before the pandemic. According to the World Bank, by 2022 there will be a shortfall of four trillion dollars in wealth creation. This is more or less the size of the German economy and is not something to be sniffed at. Should we fear being ‘condemned’ to another episode of slowdown in potential growth after a major crisis, even if its origin is neither economic nor financial? To make sure that we don’t have to respond in the affirmative to this question, committing to a recovery policy that prioritises supply over demand seems essential. Will this be the case?
We must also ask ourselves what lies behind these figures, which retrace the developments of very broad economic aggregates. In difficult times such as these, we often see, behind the averages, an increase in standard deviations. This means that certain households, certain businesses and certain countries are suffering more. The least qualified have been the most affected by the downturn in the labour market. How much time will it take for any improvement in employment to reach them? It’s also clear that prospects are not the same for a small business in the tourism sector as for another in the digital sector with global activities. Finally, a country heavily involved in manufacturing industries and with significant room for manoeuvre in terms of supportive policies (Germany, for example) is in a better position than another specialised in labour-intensive services and constrained by long-deteriorating public accounts. We must wonder about the economic, social and political implications of this divergence. Are we heading towards less growth (convoy theory?), more inequality and ultimately less harmonious societies – both internally and with others – which are therefore more difficult to manage? If this is the case, what measures should be taken to counter these risks?
We should also consider the changes in behaviour brought about by the crisis:
1. A whole series of innovations already in progress are accelerating, whether it be digitalisation, distance selling, remote working, telemedicine, artificial intelligence or biotech. Certain sectors (transport services and upstream industrial branches, for example) will have to reinvent themselves.
2. Households and companies may change their trade-off between spending and saving: more caution, just in case, and so more savings? The economic and financial implications of such a change would be significant, namely a declining investment trend and interest rates coming to rest once again one notch lower than before.
3. Those responsible for public policy are therefore facing a complicated environment to grasp in all respects: they must manage the past (a heavy and high public debt) and prepare for the future (facilitate the structural changes towards the energy and environmental transition and also towards digitalisation). But what will the consequences ultimately be for productivity and growth profiles or the financial performance of companies? How much time will it take for all this to be visible, if it happens?
When we can’t see tomorrow very well, it’s only human to hang on to what we know – yesterday. But this ‘back to basics’ only makes sense as a springboard to dive into the new opportunities provided by a changing world: after a crisis, it’s often out with the old and in with the new. Let’s keep our Christmas trees longer than usual if it makes us feel better, but let’s make sure to keep an eye open for the weak signals of a changing world. That is how we progress!
Accuracy is pleased to announce that fourteen of its experts have been named among the leading Arbitration Expert Witnesses in the Who’s Who Legal: Arbitration 2021.
Through nominations from peers and clients, the following Accuracy experts have been recognised as the leading names in the field:
Who’s Who Legal identifies the foremost legal practitioners and consulting experts in business law based upon comprehensive, independent research. Entry into their guides is based solely on merit.
Accuracy’s forensic, litigation and arbitration experts combine technical skills in corporate finance, accounting, financial modelling, economics and market analysis with many years of forensic and transaction experience. We participate in different forms of dispute resolution, including arbitration, litigation and mediation. We also frequently assist in cases of actual or suspected fraud. Our expert teams operate on the following basis:
• An in-depth assessment of the situation;
• An approach which values a transparent, detailed and well-argued presentation of the economic, financial or accounting issues at the heart of the case;
• Work carried out objectively, with the intention of making it easier for the arbitrators to reach a decision;
• Clear, robust written expert reports, including concise summaries and detailed backup;
• A proven ability to present and defend our conclusions orally.
Our approach provides for a more comprehensive and richer response to the numerous challenges of a dispute. Additionally, our team includes delay and quantum experts, able to assess time related costs and quantify financial damages related to dispute cases on major construction projects.
A review of recent economic figures leads us to the conclusion that this crisis may well be the most brutal and precipitous in recent history. In fact, studies show that when compared with the financial crisis of 2007–2009, the current crisis generated the same level of stress for economies in only six months instead of two years.
One of the most striking characteristics of this crisis is its diversity of impact, whether at the citizen, business or country level.
Accuracy, the global independent advisory firm, has promoted two new partners as part of its continued growth: Charlene Burridge in London and Florence Westermann in Paris. These promotions bring Accuracy’s total number of partners to 52, across 13 countries.
Accuracy assisted Korian in the structuring, modelling and conclusion of a major real estate partnership with BNP Paribas Cardif and EDF Invest. This long-term partnership relates to a pan-European vehicle of 81 health assets that will be controlled and managed by the Korian group.
Accuracy is pleased to have sponsored the BFM Awards 2020 and to have awarded Bris Rocher, CEO of Groupe Rocher, with the Entrepreneur of the Year Award.
Click here to watch the video of the awards (in French).
Accuracy conducted buy-side financial due diligence on behalf of Bpifrance, in advance of its investment of €8m in Coretec Group, a French family-owned group and expert in the engineering and manufacturing of tailor-made equipment for the automotive industry. It is the first investment of the Fonds Avenir Automobile 2 (FAA2), created to support automotive suppliers and managed by Bpifrance.
Coretec Group employs 360 people across four sites located in France, Poland and the Czech Republic and has achieved revenues of €33m as of 31 March 2020.
Report highlights Accuracy’s global reach, cross-disciplinary skills, innovative technology
Accuracy has been named in Global Investigations Review’s GIR 100 2020, an independent guide to the world’s top 100 investigations firms. The GIR 100 is based on extensive research with practitioners in the field and identifies those firms able to handle sophisticated cross-border government-led and internal investigations.
Accuracy is also the only France-based firm named by the guide and one of just 12 consultancy firms. The guide highlights Accuracy’s global reach, with offices in over a dozen countries.
The publication states that “rapid expansion and innovation have boosted Accuracy’s profile in recent years,” and notes the firm’s addition of “cross-disciplinary professionals with forensics, economics, technology and law enforcement experience” as well as its innovative use of technology.
GIR describes Accuracy’s engagements for multinational companies “where multiple government agencies are investigating allegations of fraud, corruption and embezzlement” and its frequent partnership with law firms ranked in the GIR 100.
“Accuracy is proud to be recognised among this elite group of investigations firms,” said Frédéric Duponchel, Accuracy’s Managing Partner. “Accuracy has assembled a team of highly experienced investigations professionals to help our clients with government and internal investigations, and this recognition demonstrates the dedication our team has brought to this work.”
Accuracy conducted financial buy-side due diligence for Ascom Invest (holding of Group By My Car) in the context of the acquisition of Marcel from Group Renault.
For some twenty years, ecological considerations in political decisions on both a national and local scale have led numerous cities across the world to put ‘clean’ mobility at the top of their agendas. This means developing vehicles that emit low amounts of local pollutants (NOx, fine particles, etc.) and atmospheric pollutants (greenhouse gases).
We talk about ‘clean’ vehicles when they produce little or no polluting emissions, but in practice no vehicle is truly clean: all vehicles emit local pollutants and greenhouse gases during their production, during their use and at the end of their useful lives.
This article deals principally with ‘zero direct emission’ transport (called hereafter ‘zero emission’ or ‘ZE’ for the sake of simplicity) which emits no direct pollution (exhaust emissions), in contrast to decarbonised transport which emits little or no CO2 and depends on the energy mix of each country.
European regulations requiring low- or zero-emission public transport have caused the number of calls to tender issued by cities for these types of transport to grow. In France, the LTECV law (for Loi de Transition Energétique pour la Croissance Verte – Energy transition for green growth law) has scheduled investments in transport infrastructure.
At present, electric battery buses are the most advanced solutions from a technical and industrial perspective for zero-emission transport. Demand for electric battery buses has therefore exploded in Europe, and the capacity of operators to roll out these vehicles in cities, whilst finding the right economic balance, has become a significant strategic challenge.
Upstream, an electric battery manufacturing sector is being established in Europe (i) to meet this demand, (ii) to secure supply (currently mostly sourced from China), (iii) to create jobs and (iv) to respond to an environmental necessity, among other objectives. Indeed, when the entire life cycle is analysed, taking into account manufacturing and transportation, the environmental impact assessment of an electric vehicle whose battery is made in China can prove disappointing. However, rolling out an electric fleet of vehicles is complex: it requires a larger initial investment than a classic fleet, both for the acquisition of the fleet itself and for the creation of the necessary infrastructure (adaptation and modernisation of bus stations and depots, recharging power, etc.). It also implies greater operating constraints (recharging time, management of battery performance, etc.). The implementation of these electric public transport fleets therefore requires complex financial and strategic choices from manufacturers, investors and operators.
The in-depth work that we have undertaken and summarised in this article makes it possible to understand the developments taking place in the electric battery sector, but also to identify the main value creation levers based on various scenarios at the level of the battery, the bus or the fleet. It also highlights other trends in the future of mobility, whether from a strategic (new business models) or technological (hydrogen battery) standpoint.
A. Electric batteries are currently 90% produced in Asia (60% in China alone). In light of the significant market growth, the wish to create a certain level of independence and the will to reduce the impact on the environment, a European electric battery sector is emerging, based on several consortia.
B. Production costs per kWh will also fall thanks to, on the one hand, technological innovations in progress and, on the other hand, improved recycling techniques and increased battery capacities.
C. Our analysis of the value chain and cost structure of a battery has enabled us to identify the production steps that provide the greatest added value. A quantitative analysis has made it possible to assess value creation levers: smart charging and recycling have proved to be two key points to maximise the economic value of a battery over its entire life cycle.
D. Making strategic choices at certain key steps in the life cycle of the battery is critical to exploiting its full potential for value creation. In particular, considering how to reuse the battery at the end of its first life makes it possible to optimise its economic potential.
E. An intermediary financial model serving as the link between the producer model and the operator model is under development: Battery as a Service (BaaS). This model gives the historical operator the opportunity to use a battery that is neither sold nor purely rented to him, but made available via a flexible, bespoke contract adapted to his needs at any moment.
F. Moreover, other forms of low- or zero-emission public transport are emerging alongside electric battery vehicles, such as hydrogen fuel cell electric buses (zero emission) or biomethane buses (low emission). Investors, operators and other actors in the sector therefore face a host of decisions – decisions that need bespoke strategic support.
INTRODUCTION
New regulations and more accessible prices have given rise to the ambitions of numerous cities to reduce CO2 emissions by putting in place low- or zero-emission public transport fleets. Moreover, the Paris Agreement and certain laws related to energy transition in Europe have established precise objectives for 2025 and 2030, in particular the LTECV (Loi de Transition Energétique pour la Croissance Verte – Energy transition for green growth law) from August 2015 in France. In addition, for some ten years, the improvement in electric battery performance, the diversification of the offer (autonomy, capacity, charging time, etc.), the significant growth of demand and the reduction of prices have all facilitated the rise of electric mobility.
The zero-emission (electric or hydrogen fuel cell battery) or low-emission (biomethane or natural gas) sector has turned out to be even more strategic in this post-quarantine period linked to COVID-19, which has further highlighted the stakes related to energy transition. As stated by the UN, this crisis ‘provides a global impetus to reach sustainable development objectives by 2030’. However, the path between ambition and implementation is riddled with pitfalls. For example, Paris – via the RATP and Ile-de-France Mobilités – was aiming for a 100% clean bus fleet by 2025, with 80% electric buses (i.e. zero emission) and 20% biomethane buses (i.e. low emission), in the city’s ‘2025 bus plan’.
However, economic constraints are such that today the objective is to replace only two thirds of the fleet with electric buses, the last third being comprised of biomethane (‘biogas’) buses1. These economic constraints concern both the financial investment and the economic and operating models. But let’s start by looking into the current stakes of the electric battery market.
1. THE CURRENT ELECTRIC BATTERY MARKET
A. The rise of a sustainable and competitive electric battery sector in Europe
Over the past ten years or so, the lithium-ion battery market has exploded. Today, two major trends are at play (Figure 1):
• the decrease in the price of lithium-ion batteries, which amounted to $209 per kWh in 2017 and should fall below $100 per kWh by 2025;
• the increase in global production capacity, estimated at 13% per annum on average between 2018 and 2030.
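As a rough illustration of these two trends, the short sketch below derives the constant annual rate of price decline implied by the figures quoted above and projects production capacity at the quoted 13% per annum. The constant-rate interpolation and the ~500 GWh starting capacity are our simplifying assumptions, not figures from the underlying market studies.

```python
# Minimal sketch of the two Figure 1 trends, under the assumption of
# constant annual rates between the quoted data points.

def cagr(start, end, years):
    """Constant annual growth rate linking two values `years` apart."""
    return (end / start) ** (1 / years) - 1

def project(value, rate, years):
    """Compound a value forward at a constant annual rate."""
    return value * (1 + rate) ** years

# Price: $209/kWh in 2017, reaching $100/kWh by 2025.
price_rate = cagr(209, 100, 2025 - 2017)         # roughly -9% per year

# Capacity: ~13% average annual growth, from an assumed ~500 GWh today.
capacity_2030 = project(500, 0.13, 2030 - 2020)  # roughly 1,700 GWh
```

Under these assumptions, the quoted price path implies a decline of roughly 9% per year, compounding to more than a halving of the price per kWh over eight years.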
Today, global production of Li-ion batteries, all uses combined, amounts to a capacity of around 500 GWh. Asia, and China in particular, is by far the leader in this sector: Chinese production alone represents approximately ten times European production. It is no surprise, then, that seven of the top ten Li-ion battery manufacturers are Chinese – the leader being the giant CATL – representing a capacity of approximately 300 GWh2.
The sub-sector relating to Li-ion battery electric vehicles represents 70% of this market, that is, approximately 350 GWh.
Figure 1: Development of production capacity and prices of all-purpose Li-ion batteries between 2005 and 20303 4 5
And 40% of this sub-sector relates in particular to buses and other commercial vehicles, that is, 140 GWh. This production is also dominated by China, particularly the Chinese company CATL (70%6 of the bus battery market), as the electrification of bus fleets in China was pushed by the government much earlier than in Europe: for example, since 2009 the city of Shenzhen has benefitted from government subsidies for the development of its electric fleet.
Though production remains mostly Chinese, the USA and Europe should gain market shares, growing from only 10% of global electric battery production in 2020 to 40% in 2030. This rise in production capacity outside Asia will lead to a better balance between supply and demand. It is therefore a contributing factor to the reduction of prices, gains in factory productivity thanks to economies of scale, and the increase in the capacity of production chains. Tesla’s gigafactory in Nevada, for example, will produce 35 GWh annually in 2020 against 20 GWh in 2018. Similarly, the Swedish company Northvolt, starting with a capacity of 16 GWh, plans to double its factory’s production capacity by 2030 and end up reaching 150 GWh in 2050.
With regard to Europe in particular, the local sector is being built where political risk is low, financial incentives are high and administrative processes are easy. Easy access to qualified labour, reliable energy resources and a secure supply of raw materials are all essential. All of these conditions come together in Europe, where the commitment to transitioning to a low-emission system is strong. The presence of highly qualified engineers is also an advantage for the years to come, in the context of rapid technological developments. All of these elements make Europe a high-potential zone for the production of electric batteries. Indeed, significant political and financial means have been mobilised to give rise to European or transnational projects.
Therefore, as shown in Figure 2, even if Asia remains dominant in the electric battery market, an international rebalancing will take place by 2030, particularly at the European level.
Figure 2: Development of production capacity of Li-ion batteries by region
(location based on company HQ)4
Figure 3 presents the current landscape of cell and battery production in Europe. The significant presence of Asian actors is evident, as well as the European large-scale factory construction projects, aiming to structure a sustainable and economically viable industrial sector.
The EU programme European Battery Alliance (EBA250), launched in October 2017, is made up of 17 private companies directly involved throughout the value chain, including BASF, BMW, Eneris, and especially the joint venture ACC (Automotive Cells Company) between PSA (and its German subsidiary Opel) and SAFT (a subsidiary of Total). They are supported by over 120 other companies and partner research organisations, as well as public bodies such as the European Investment Bank. The aim is to develop highly innovative and sustainable technologies for Li-ion batteries (whether liquid electrolyte or solid state) that are safer and greener, exhibiting a longer lifespan and a shorter charging time than those currently on the market. The EBA250 benefits from €5 billion in private financing and €3.2 billion in public financing, including €1 billion from France and €1.2 billion from Germany.
Figure 3: Cell and battery production plant projects under way in Europe7 8 9 10 11 12
More precisely, ACC, often nicknamed the ‘Airbus of batteries’, will build a pilot plant in the south-west of France, followed by two cell production factories for electric batteries in the Hauts-de-France region and in Germany. Another major project – the construction of a gigafactory – is being undertaken by the French start-up Verkor13 (notably supported by Schneider Electric) and aims to produce Li-ion cells for southern Europe (France, Spain and Italy) from the end of 2023. This project takes its inspiration directly from the Swedish start-up Northvolt, which raised €1 billion from private investors (including Volkswagen, BMW and Goldman Sachs) to finance the creation of a lithium-ion battery production factory in Sweden. Verkor’s project represents an investment of €1.6 billion and the 200-hectare factory will likely be based in France. Similarly, the Norwegian company Freyr launched the construction of a battery cell manufacturing plant in Norway (€4.5 billion), which will have a capacity of 32 GWh from 2023 and will be one of the largest in Europe.
It is worth mentioning that other projects are under development to build a European battery recycling sector, a key step in the electric battery value chain. Supported by Eramet, BASF and Suez, the ReLieVe (Recycling for Li-ion batteries for Electric Vehicles) project – with a smaller budget of €4.7 million – aims to develop an innovative and competitive ‘closed-loop’ recycling process, enabling the recovery of nickel, cobalt, manganese and lithium for new batteries.
B. Better performance thanks to new conception and recycling technologies, which lead to a reduction in production costs
The technical performance criteria of electric batteries, such as autonomy or specific capacity (stored energy per unit of mass), should triple by 2030 thanks to new battery technologies, as shown in Figure 4. Incremental innovations in Li-ion batteries will make it possible in the short term to replace the rare metals used in the manufacture of the electrodes, such as cobalt and manganese, which are too expensive and polluting. The 33% reduction in the use of cobalt, partially replaced by nickel, which is much less expensive, will make it possible to offset the 40% increase in the price of cobalt forecast between 2020 and 2030. With 60% nickel, 20% manganese and only 20% cobalt, NMC 622 technology will replace NMC 111 batteries (which contain 33% cobalt) and will represent 30% of the market in 2030. By 2030, new disruptive technologies are expected, with new cathodes and solid electrolytes in particular, greatly increasing the reliability of the battery. Current batteries that use a liquid electrolyte work efficiently at room temperature, over a range between 0°C and 45°C14; a solid electrolyte, however, enables a wider range of use, between -20°C and 100°C15. In addition, Samsung has recently patented a battery in which the cathode and the anode are covered with graphene balls; its recharge time is five times quicker. As for batteries with silicon anodes, they have greater capacity thanks to the replacement of the usual graphite anode with a silicon anode derived from the purification of sand.
Figure 4a: Development of battery technologies up to 203016 17
Figure 4b: Development of market share of the different Li-ion battery technologies up to 2030
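The cobalt arithmetic above can be checked in a few lines. In the sketch below, the multiplicative cost proxy (cobalt cost ∝ content × price) is our simplifying assumption; the 33% content reduction, the 40% price increase and the cathode compositions are the figures quoted in the text.

```python
# Quick check of the claim that using less cobalt offsets its rising price.
content_factor = 1 - 0.33    # 33% less cobalt used (quoted)
price_factor = 1 + 0.40      # cobalt 40% more expensive by 2030 (quoted)
cobalt_cost_change = content_factor * price_factor - 1   # about -6%

# Cathode compositions quoted for the two chemistries (Ni, Mn, Co shares):
nmc_111 = {"Ni": 1 / 3, "Mn": 1 / 3, "Co": 1 / 3}
nmc_622 = {"Ni": 0.60, "Mn": 0.20, "Co": 0.20}
cobalt_share_drop = 1 - nmc_622["Co"] / nmc_111["Co"]    # 40% lower share
```

Under this proxy, the cobalt bill per battery would actually fall slightly (by about 6%), consistent with the offset described above.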
Finally, recycling costs should fall as understanding of current techniques (hydrometallurgy and pyrometallurgy) advances. A new, much less expensive technique is currently under development: the ‘direct recycling’ process. In this process, the electrolyte and the materials making up the cathodes are recovered to be reused directly, with no metallurgical treatment necessary. Figure 5 below shows the advantages and disadvantages of each of these recycling methods.
Figure 5: New recycling methods: less expensive and more environmentally friendly solutions18 19 20
The combination of these elements (improved performance, reduction in proportions of rare materials, new recycling processes) will enable a drastic reduction in production costs by 2030, making the electric battery market a promising sector for investors. Our cost structure model (cf. Figure 6 below) indicates that by 2030 the production cost of an NMC 111 battery will decrease by at least 25% compared with its current level. For future battery technologies, this reduction will be greater. For example, Tesla has announced a 56% reduction by 2022 in the production cost per kilowatt-hour of its new batteries thanks to a series of technical innovations.
However, though costs are expected to fall significantly, the financial equation for electric vehicle fleets remains complex. Our analysis of the life cycle of a battery, its cost structure and its performance factors makes it possible to identify certain value creation levers that could make all the difference for transport operators.
Figure 6: The cost structure of a battery (NMC 111) makes it possible to anticipate its production costs by 2030
2. MAXIMISING THE VALUE OF A BATTERY THANKS TO THE DETAIL OF ITS COSTS THROUGHOUT ITS LIFE CYCLE
A. A cost structure that reveals the stages with the highest added value in the manufacturing cycle of a battery
The electric battery value chain can be broken down into several stages (Figure 7): supply of raw materials, manufacture of basic chemical components, conception and production of cells generating electrical energy, conception and production of modules, manufacture of packs (protection against shocks, vibrations), integration of the battery into smart control and performance management systems (battery management system), and, finally, recycling of components and metals at the end of their useful lives. This last stage shows that batteries still have value, even at the end of their useful lives.
Figure 7: Value chain of an electric battery: stakes and challenges21 22
To determine the cost structure of a battery, we have analysed the impact of each stage on the value of a new battery. Four types of expenditure appear at each stage: purchase costs (raw materials or components), labour costs, R&D costs and fixed costs (expenditure linked to electricity or to the additional material necessary for the conception of the cells).
The stage related to the manufacture of basic components is the most expensive (26% of the total cost) because it concerns the various elements making up the electrodes and the solvent contained in the electrolyte. The integration of the battery into a smart system is also a crucial step (22%) due to the importance of the software in monitoring the performance of the battery, which requires a significant investment in R&D. This stage also provides the most added value insofar as the increase in the level of production will not lead to an explosion in R&D costs – these will already have been incurred. Finally, the cell conception and production stage is the third most expensive. It is characterised by high R&D and labour costs.
Figure 8: Value chain of an NMC 111 battery in 202023 24
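The shares above can be organised into a simple cost model. Only the 26% (basic components) and 22% (smart-system integration) shares are quoted in the text, and cell conception and production is described as the third most expensive stage; the remaining shares in the sketch below are placeholder assumptions chosen so that the split sums to 100%.

```python
# Illustrative split of a new battery's cost across value-chain stages.
stage_shares = {
    "raw materials": 0.10,                   # assumption
    "basic components": 0.26,                # quoted: most expensive stage
    "cell conception and production": 0.18,  # quoted as third most expensive
    "modules and packs": 0.14,               # assumption
    "smart-system integration": 0.22,        # quoted
    "recycling / end of life": 0.10,         # assumption
}

def stage_costs(total_cost_eur, shares=stage_shares):
    """Spread a battery's total cost over the stages, pro rata."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9
    return {stage: total_cost_eur * share for stage, share in shares.items()}
```

For a hypothetical €100,000 battery pack, this attributes €26,000 to basic components and €22,000 to smart-system integration, mirroring the ranking described above.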
B. Identification of the key stages in a battery’s life cycle to maximise its value
The state of health (SoH) of a battery is an indicator that helps to optimise its use. Mobility contracts with electric bus operators generally stipulate an SoH of between 100% and 80%. Beyond this limit, the battery cannot be used with the same level of security and efficiency – it is the end of its first life. The battery is therefore at a critical moment in its life cycle, where choices must be made: if its performance allows it, the battery can be used again in another contract; it can be allocated to a stationary energy storage unit for its second life (to balance the grid, for example); or it can be sold at the end of its useful life to be recycled, with certain components refined for reuse.
Figure 9: Life cycle of an electric battery (based on SoH)
Four factors can lead to the deterioration of a battery (a decrease in capacity and an increase in internal resistance):
• Temperature (T): extreme temperatures negatively affect the state of health of a battery. At high temperatures, the internal activity of a battery increases, thereby reducing its capacity; below 0°C, internal resistance increases considerably, thereby accelerating its ageing25.
• The charge and discharge rate (C-rate): this corresponds to the intensity of the electric current going through the battery. The higher it is, the quicker the battery will age.
• The state of charge (SoC): this relates to the proportion of energy stored by the battery compared with its total capacity. The capacity of a battery deteriorates not only during charge/discharge but also, to a lesser extent, when it is stored or left unused while not empty. Storing batteries with a relatively low SoC is therefore recommended to limit their deterioration. To optimise their lifespan, batteries should only occasionally be recharged to 100%, to balance the cells.
• Depth of discharge (DoD): this represents the percentage of energy that the battery has lost since its last recharge and therefore characterises its charging profile. The greater the DoD, the quicker the battery will deteriorate. Depending on the type of battery used, the optimal DoD (hardly possible operationally!) varies between 50% and 70%.
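The four factors can be combined into a toy state-of-health model. The functional form and coefficients below are illustrative assumptions, not a calibrated degradation model; only the qualitative directions (high C-rate, deep discharge and extreme temperatures all accelerate ageing) come from the text.

```python
# Toy SoH model: 1.0 = new battery; first life typically ends at SoH = 80%.

def soh_after(cycles, c_rate=0.5, dod=0.6, avg_temp_c=25.0):
    """Rough state of health after a number of charge/discharge cycles."""
    fade = 5e-5                                # base fade per cycle (assumed)
    fade *= 1 + max(0.0, c_rate - 0.5)         # high C-rate ages faster
    fade *= 1 + max(0.0, (dod - 0.6) * 2)      # deep discharge ages faster
    fade *= 1 + abs(avg_temp_c - 25) / 50      # away from room temperature
    return max(0.0, 1.0 - fade * cycles)

def cycles_to_end_of_first_life(threshold=0.80, **conditions):
    """Count cycles until the contractual SoH threshold is reached."""
    n = 0
    while soh_after(n, **conditions) > threshold:
        n += 1
    return n
```

Under these assumed coefficients, a profile with higher C-rates (more frequent rapid recharges) reaches the 80% threshold after markedly fewer cycles, in line with the urban-versus-semirural comparison discussed in the text.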
Knowing the deterioration factors of a battery makes it possible to anticipate this deterioration based on its use, its technology, the monitoring of its performance and its conservation. For example, charge and discharge modes vary greatly depending on whether the battery is used in urban or semirural environments. A semirural use would lead to greater deterioration due to the distances travelled, requiring more frequent and rapid recharges.
Based on these factors, we have highlighted value creation levers able to be used to control and maximise the value of the battery throughout its life cycle. These levers concern the optimisation of a battery’s use, the management of its performance and the management of used batteries.
Figure 10: The ten value creation levers of an electric battery
One of these levers is smart charging, that is, smart and innovative technology making it possible to recharge electric buses at the optimal time: not saturating the grid with demand for electricity, avoiding peaks in demand from both households and electric vehicles at the same time, for example.
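As a minimal sketch of what smart charging means in practice, the block below schedules a charging session in the cheapest hours of a hypothetical two-level day/night tariff. The tariff values and the charging power are assumptions for illustration only.

```python
# Hypothetical tariff: €0.20/kWh at peak (07:00-21:00), €0.10/kWh off-peak.
tariff = {h: (0.20 if 7 <= h <= 21 else 0.10) for h in range(24)}

def cheapest_hours(hours_needed, tariff=tariff):
    """Pick the cheapest hours of the day for a charging session."""
    return sorted(sorted(tariff, key=tariff.get)[:hours_needed])

def session_cost(hours_needed, power_kw=150, tariff=tariff):
    """Energy cost of charging at constant power in the cheapest hours."""
    return sum(tariff[h] * power_kw for h in cheapest_hours(hours_needed, tariff))
```

Under this tariff, a four-hour overnight session costs half as much as the same session at peak hours, which is the point of the lever: shifting demand away from the grid's busiest periods also lowers the operator's energy bill.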
A second interesting lever concerns improving recycling techniques, leading to a reduction in recycling costs. Indeed, the continued improvement of current techniques (hydrometallurgy and pyrometallurgy) and the emergence of new efficient techniques (the ‘direct recycling’ process) contribute to the prolonged use of the battery into a second life, followed by its recycling, instead of a shorter use that would be limited to the first life of the battery followed by its sale.
Finally, a third lever consists of managing battery performance, and therefore the know-how related to performance monitoring. ‘Maintenance’ contracts are proposed by battery suppliers. As part of these contracts, certain parameters (SoC, DoD, C-rate, charge intensity, temperature during charge/discharge, etc.) are measured via a battery management system (BMS) to monitor performance: the battery undergoes several charge and discharge cycles under varying conditions and the analysis of the data collected by the BMS can lead to the battery’s replacement if it has deteriorated too much or if the conditions of use no longer comply with the conditions of the contract, particularly those related to safety. But this performance monitoring is currently proving to be more a matter of insurance than of maintenance in the strict sense of the word. That is why a value creation lever would be to renegotiate the contract to bring it closer to the real costs of monitoring performance or even internalising this know-how, more for strategic reasons than for financial ones. Indeed, controlling operating data and battery performance data in real time is crucial because it makes it possible to adapt battery technologies as closely as possible to the use made of them. It should be noted, however, that this last lever is only applicable with great difficulty at present, as numerous battery manufacturers do not allow their clients to internalise this service.
To illustrate all of this, we have modelled in the example below the effects of different levers on a fleet of 25 buses in both an urban and a semirural context. The options analysed are as follows: smart charging or not during the first life; resale of the battery or reuse in a new contract at the end of the first life; reuse in stationary energy storage infrastructure in the second life (as reserve capacity in this particular case). We note that:
• smart charging creates value systematically and, moreover, has the benefit of being simple to implement;
• frequency regulation is not worthwhile, due to high investment costs, a second life that is too short, and an energy resale price that is too low in France;
• the use of a new contract at the end of a battery’s first life, rather than reselling it, is appealing in an urban scenario because the battery deteriorates more slowly than in a semirural scenario.
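The trade-offs above boil down to a net present value comparison across battery-use scenarios. The sketch below is a minimal, hypothetical illustration: the discount rate, cash flows and battery lifetimes are invented round numbers, not the inputs used in the study's model.

```python
# A minimal NPV sketch comparing battery-use scenarios for a bus fleet.
# All figures (cash flows, discount rate, battery life) are hypothetical
# illustrations, not the values behind the study's modelling.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of cash flows, where cash_flows[0] occurs at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

RATE = 0.06  # assumed discount rate

# Per-battery cash flows in k€: initial outlay, annual net savings vs diesel,
# then an end-of-first-life event (resale vs continued second-life revenue).
scenarios = {
    # 8 years of first-life use, then resale of the battery
    "resale_after_first_life": [-150] + [25] * 8 + [20],
    # same first life, then 5 years of second-life stationary-storage revenue
    "second_life_storage": [-150] + [25] * 8 + [8] * 5,
    # smart charging: slightly higher annual savings (cheaper off-peak energy)
    "smart_charging_plus_resale": [-150] + [28] * 8 + [20],
}

for name, flows in scenarios.items():
    print(f"{name:30s} NPV = {npv(RATE, flows):7.1f} k€")
```

Comparing the scenario NPVs in this way mirrors the logic of the analysis: smart charging raises annual savings at negligible cost, whereas second-life options only pay off if the additional revenue outweighs the forgone resale value.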
Numerous operational decision-making factors must therefore be taken into account, all of which have a real impact on the economic model of electric fleets. That said, beyond these levers enabling operators to optimise the performance of their batteries, there are still other avenues to explore in the face of the complexities of the classic electric bus model: the first consists of a new financial and operational management model for these buses; the second consists of alternative modes of low- or zero-emission transport.
Figure 11: NPV calculation for an NMC battery based on its use and the use of certain levers26 27
3. NEW PERSPECTIVES IN THE MANAGEMENT OF ZERO-EMISSION BUSES
A. The emergence of new economic models: the BaaS model
Despite significant technological advances and the expected reduction in the production costs of electric batteries, technical constraints remain substantial for electric transport operators. First, capital expenditure is higher than for classic vehicles (50% higher than for a diesel fleet28). In addition, performance control, battery maintenance and decisions to be made when battery efficiency is reduced are complex parameters to implement for historical bus operators. In this context, the emergence of the battery as a service (BaaS) model almost seems obvious.
Battery as a service essentially frees transport operators from the constraints and risks associated with the management of a battery. The service provider takes care of all aspects linked to the battery’s use, from its certification (in compliance with safety and environmental standards) to performance monitoring to recycling; the service provider also ensures that the service provided complies with the expectations of its client, the transport operator, at all times, with a view towards value optimisation. The service provider therefore has to find the optimal contract and use profile for the battery, depending on the stage of the battery’s life cycle – and therefore its performance – at any given moment. It is its understanding of the different value creation levers, as well as its in-depth knowledge of battery performance, that enables the service provider to determine the ideal client or contract profile adapted to its battery. Some of the best-known BaaS companies include Global Technology Systems, Yuso, Swobbee and Epiroc.
Figure 12: Three different business models
B. The development of new low- or zero-emission means of transport
Figure 13: Forecast number of electric and hydrogen buses up to 2025
In parallel with the rise of electric battery buses, other clean means of transport are under development, such as low-emission buses running on biomethane (biogas) or zero-emission buses running on hydrogen. These technologies are growing substantially across the world, despite differences in their level of maturity, depending on the country.
Depending on the local energy source, biogas buses constitute a low-emission technology (a 25% reduction in toxic fume emissions compared with petrol vehicles), which has the advantage of an excellent level of autonomy and a short recharge time. However, the infrastructure to be put in place is substantial and expensive.
Battery and hydrogen zero-emission (ZE) buses are two complementary technologies. Indeed, hydrogen technology (which is more expensive) becomes more relevant where battery technology reaches its limits or in future scenarios (grid saturation, for example). This zero-emission technology provides a high level of autonomy and relatively short recharge cycles (Air Liquide estimates that a hydrogen bus can be recharged in less than 20 minutes29). Nevertheless, the required infrastructure is considerable (hydrogen recharge stations) and the network is virtually non-existent or only at its inception in the majority of large cities today. However, as numerous French cities have shown an interest in this technology by launching pilot projects, the government’s recent recovery plan following the COVID-19 health crisis will dedicate more than €7 billion over ten years to this energy of the future, aiming in particular to build factories able to produce electrolysers (an electrolyser produces hydrogen from water and electricity via electrolysis; a fuel cell then converts the hydrogen back into electricity on board the vehicle). The hydrogen plan forecasts financing of €1.5 billion, in cooperation with Germany, to develop a hydrogen sector similar to that being undertaken for electric batteries.
Figure 14: New types of low- or zero-emission mobility30 31
CONCLUSION
The main challenge facing the development of the electric battery sector is to multiply supply considerably in order to match the significant increase in demand. This project is currently materialising through the creation of a sustainable and competitive battery manufacturing and recycling industry in Europe.
In parallel, battery technologies are improving, with batteries gaining in autonomy and specific capacity. Recycling methods are also the subject of critical technical innovation, which should lead to a significant reduction in total production costs by 2030.
However, constraints remain significant for electric mobility players: the amount of capital expenditure, the control of battery performance and the complexity of decisions to be made when their efficiency starts to deteriorate are all parameters that have favoured the emergence of new economic models of battery use, such as the BaaS model, as well as other modes of clean mobility that should be closely monitored, such as the hydrogen bus.
These developments, in economic model and technology, should lead historical players and new entrants in the zero-emission transport sector to change their strategy and investment policies.
In this phase of significant transformations for the whole sector, Accuracy has developed a strategic support framework in order to help these players to identify and seize the truly sustainable and profitable opportunities in the value chain.
1 De moins en moins de bus électriques dans la future flotte de la RATP, Ville Rail & Transports, Marie-Hélène Poingt, 04.03.2020
2 https://www.energytrend.cn/news/20191014-76629.html, Institut de recherche de point de départ (SPIR)
3 Lithium-ion Battery Costs and Market, Bloomberg New Energy Finance, 05.07.2017
4 Developing a promising sector for Quebec’s economy, Propulsion Québec, April 2019
5 Roadmap Battery Production Equipment 2030, VDMA, 2018
6 http://escn.com.cn/news/show-711124.html, China Energy Storage Network
7 Comment la filière des batteries pour véhicules électriques tente de se structurer en Europe, L’Usine Nouvelle, 06.09.2019
8 CATL starts building battery plant in Germany, electrive.com, 19.10.2019
9 LG Chem battery gigafactory in Poland to be powered by EBRD, European Bank, 07.11.2019
10 https://northvolt.com/production
11 https://www.envision-aesc.com/en/network.html
12 Samsung SDI expands its battery plant in Hungary, INSIDEEVs, 24.02.2020
13 Avec Verkor, la France compte un autre projet de giga-usine de batteries, Les échos, Lionel Steinman, 30.07.2020
14 La batterie Lithium-Ion, mieux comprendre pour mieux s’en servir, Amperes.be, 10.05.2017
15 La batterie à électrolyte solide : une révolution pour l’automobile, Les numériques, Erick Fontaine, 23.11.2017
16 Study on the Characteristics of a High Capacity Nickel Manganese Cobalt Oxide (NMC) Lithium-Ion Battery—An Experimental Investigation, www.mdpi.com/journal/energies, 29.08.2018
17 Oxygen Release and Its Effect on the Cycling Stability of LiNixMnyCozO2 (NMC) Cathode Materials for Li-Ion Batteries, Journal of The Electrochemical Society, 02.05.2017
18 A Mini-Review on Metal Recycling from Spent Lithium Ion Batteries, www.elsevier.com/ locate/eng
19 The recycling of Lithium-ion batteries, Ifri, 2020
28 Analyse coûts bénéfices des véhicules électriques – Les autobus et autocars, Service de l’économie, de l’évaluation et l’intégration du développement durable, October 2018
The COVID crisis is the largest global economic shock since the Second World War. As a result of the health crisis, billions of people were confined to their homes, bringing the economy to a sudden halt. Consequently, global trade came to a standstill, hundreds of millions lost their jobs and indebtedness has greatly increased. As economies begin to restart, it is becoming clear that the impacts of the crisis are not merely transitory.
Accuracy provided buy-side assistance to Schneider Electric in the context of the public takeover of German construction software company RIB Software.
In collaboration with:
Charles-Antoine Condomine (Manager), Marius Henault (Analyst),
Justine Schmit (Manager) and Vincent Thebault (Manager).
IN SUMMARY
It is widely recognised that a safe haven investment is one whose value does not fall during an economic or financial crisis. A safe haven investment is therefore a counter-cyclical investment in the sense that it is highly resistant to economic cycles and exhibits lower correlation to more risky asset classes.
Gold is often presented as the safe haven investment of choice. It has served as a store of value since ancient times. Its market price is not directly linked to changes in financial markets or the economic context, but this does not necessarily mean that its value will not fall, as was the case from 2012 to 2016 (figure 7).
Even if real estate is also often presented as a safe haven investment, it is worth investigating the reality of such a proposition. Indeed, real estate is often presented and discussed as a whole in the mainstream media. However, the term covers various asset classes, each following its own logic and rationale. The current crisis is revealing the risk associated with various assets, and now is therefore an ideal time to discuss this notion.
In this article, we will examine the real estate market from two different perspectives:
A. First, we will analyse direct ownership of a real estate asset. We will note that the French prefer investing in real estate directly, as it is a reassuring asset class with an unrivalled (and even improving) balance between risk and reward over the long term.
B. We will then focus on mechanisms for indirect investment in real estate, such as shares in different types of real estate companies, developers and REITs1, with a special focus on this last type. These companies (whether listed or not) tend to outperform the market in the long term, but they are more risky than direct ownership of the underlying asset. This usually results from the high levels of debt that these companies often have, as well as the highly specific nature of their assets (housing, shopping centres, offices, warehouses, etc.).
As a conclusion, we will touch upon the impact of the current health crisis on the French real estate market. The market has slowed down considerably, with multiple risks weighing down on both real estate assets and stocks.
1. INVESTING IN BRICKS AND MORTAR, AN UNBEATABLE RISK–REWARD RATIO OVER THE PAST 30 YEARS
A. Real estate, an asset class appreciated by investors (and individuals)…
The French are fond of investing in real estate: for example, 65% of them owned their main residence in 2018, compared with 52% in Germany2. This proportion grew continuously between 1980 and 2010 and has since remained flat at this level. Further, real estate represents approximately 61% of the wealth of an average French household3.
Although the price of real estate can vary depending on different parameters (location, interest rate, economic context, demographics, etc.), several reasons can explain the preference of individuals and investors for this class of asset:
• Real estate assets are tangible, physical, material.
• They benefit from (i) a primary use value, in the sense that they fulfil certain fundamental needs (for individuals as much as for businesses), and (ii) a high exchange value. These assets can easily be rented or sold, which makes them liquid in a market and therefore easily transferable. This represents a level of security for the owner, who is able to dispose of the asset quickly and relatively easily.
• Real estate lasts over time, with a longer operating cycle and investment horizon than the majority of other assets (construction and occupation periods are long: construction takes a minimum of 18 months, and French commercial leases run for 3, 6, 9 or 12 years).
• Real estate assets are able to generate stable revenues over time.
For these reasons, the price of real estate does not correlate particularly highly with other asset classes such as shares or bonds. These differences make investing directly in real estate an attractive option when pursuing a strategy to diversify an investment portfolio. But beyond pure diversification purposes, investing directly in real estate in France can help investors to optimise their risk–reward ratio in their portfolios.
B. …and an unbeatable risk–reward ratio over the past 30 years
The return on a real estate asset can be received in two ways:
• Rent received (or saved, for an individual occupying his or her main residence) from the letting of the asset;
• The variation in the value of the real estate over time.
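The two components can be combined into a simple total-return estimate. The sketch below uses invented round numbers for the purchase price, net rent and resale value; it is an illustration of the calculation, not IEIF data.

```python
# A simple sketch of the two return components for a directly held property.
# The purchase price, rent and resale value are hypothetical round numbers.

purchase_price = 200_000   # €
annual_rent_net = 7_000    # € net rent received (or saved by an owner-occupier)
resale_price = 230_000     # € obtained after holding_years
holding_years = 5

# Component 1: rental yield (rent relative to the capital invested)
rental_yield = annual_rent_net / purchase_price

# Component 2: annualised capital growth between purchase and resale
capital_growth = (resale_price / purchase_price) ** (1 / holding_years) - 1

total_annual_return = rental_yield + capital_growth
print(f"rental yield       : {rental_yield:.2%} per year")
print(f"capital growth     : {capital_growth:.2%} per year")
print(f"approx total return: {total_annual_return:.2%} per year")
```

With these assumed figures, the rental yield contributes 3.50% per year and price appreciation just under 3% per year, which shows why both components matter when comparing real estate returns with those of other asset classes.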
Based on data published by the Institut de l’Epargne Immobilière et Foncière (IEIF), over a period of 30 years, the return on a real estate asset has equalled that of the stock market but with a lower level of volatility (and therefore lower risk)4.
Figure 1: Average risk–reward ratio over 30 years (1988–2018) in France for different classes of asset
Source: IEIF (Institut de l’Epargne Immobilière et Foncière)
–
Figure 1 notes:
The return calculation is based on an entry price, an exit price and intermediate flows for unlisted assets and on annual performance with revenues reinvested for listed assets.
*Livret A corresponds to a type of instant savings account.
If we look at the development of the risk–reward ratio over a shorter term (five years in the figure below), we can see that real estate broadly outperforms other classes of asset.
Figure 2: Average risk–reward ratio over five years (2013–2018) in France for different classes of asset
Source: IEIF
Over the period 2013–2018, a period of more limited volatility in the stock market, the return on real estate assets remains high for a limited risk. The stability in this performance can be explained by several factors, including:
• the scarcity effect of real estate in certain areas (land availability, lack of new housing construction, etc.);
• the appetite of the French for this asset class;
• strict financing conditions required by lending banks in France, based on solvability criteria versus real estate value. Such a mechanism limits downward trends in case of economic downturns;
• recent easier access to credit and a downward trend in interest rates.
The risk–reward ratio of real estate assets is therefore more attractive than that of other classes of asset, no matter their level of risk. It should be noted that unlisted real estate funds (SCPI, OPCI) perform in line with direct ownership. Only gold (which generates neither rent nor dividends!) comes close to the return profile of a real estate asset, but it suffers from a less favourable volatility profile due to its significant price sensitivity to tensions in the economy, with movements further amplified by leveraged financial products.
C. Transaction volumes historically affected by crises but proving resilient
The characteristics of the real estate market have pushed French investors to invest heavily in this asset class.
Figure 3: Number of transactions and year-on-year change in prices of second-hand housing in France from Q1 2000 to Q3 2019
Source: CGEDD according to DGFiP (MEDOC) and notary databases
The number of conveyancing transactions for homes in France remained broadly constant until the subprime crisis; it even grew between 2001 and 2003, despite the stock market crash following the bursting of the dot-com bubble and the 9/11 attacks in 2001.
The volume of transactions fell by approximately 30% between 2007 and 2008, as access to credit was restricted in this period. This fall in the number of transactions led to a decrease in the average price of real estate in France of 9% between Q1 2008 and Q1 2009.
The real estate market started to recover from 2009, with the massive quantitative easing programmes instigated by central banks and updated conditions for access to credit. Between spring 2009 and summer 2011, real estate prices in France grew by 12%5, and the cumulative volume of transactions over 12 months returned to its level from before the crisis, all this despite the ongoing eurozone crisis.
This level of transaction volumes continued growing significantly over the period, after a low point in 2012. This low point mostly derived from a slowdown in investment decisions made by individuals because of the economic uncertainty partially generated by the sovereign debt crisis in the eurozone.
Thus, we can see that over a long period the volume of transactions for residences can sometimes be affected by economic crises, but it tends to recover quickly. In parallel, the value of the underlying assets is resilient in times of crisis, notably because of the reluctance of real estate owners to reduce selling prices.
Direct ownership of real estate can therefore be considered a safe haven investment due to its lower volatility compared with the stock market. Its past performance can even be considered a paradox, a golden era of sorts, as its return on investment has been equal to (over a 30-year period) or even higher than (over a 5-year period) that of stocks. Such surprising performance can be explained by several factors, such as falling interest rates or the scarcity effect. In the next part, we will review indirect investment in real estate through ownership of REIT shares.
2. LISTED REAL ESTATE PLAYERS: FROM DEFENSIVE TO VOLATILE VALUES
A. Two large categories of shares for two different roles: developers and REITs
As a reminder, the real estate industry can be split into two main types of players:
• Developers, which build and then sell properties;
• REITs, which invest in and then manage (or outsource the management of) properties.
These two categories for the most part comprise listed international players operating in several countries – real estate provides investment opportunities all around the world. The largest market is in the USA, with over 220 listed real estate funds and numerous significant developers. By comparison, Europe has around 30 listed funds.
The IEIF Real Estate France index provides a view of the market performance of real estate players in France, composed of 13 REITs and five real estate developers listed in France (cf. appendices for breakdown). Over the period from 1991 to 2020, companies making up the index developed significantly, with for example several major mergers and acquisitions (Unibail and Rodamco in 2007, then Westfield in 2017, Klépierre and Corio in 2015). Furthermore, several French REITs are amongst the leaders in Europe.
B. Real estate stocks: from defensive to volatile values
As shown in the figure below, the IEIF index has outperformed the CAC 40 since the dot-com bubble burst in 2001–2003, with a rapid recovery following the subprime crisis of 2008–2009.
Figure 4: IEIF Real Estate France index vs CAC 40 until 31 December 2019 (base 100 as at 31/12/1990)
Source: IEIF, CapitalIQ
–
Figure 4 notes:
As REITs distribute a significant level of dividends, the indices have been given with the reinvestment of gross coupons.
In September 2000, the CAC 40 reached a significant peak of 6,944 points (excluding reinvested dividends), the culmination of a period of strong growth since the mid-1990s. The bursting of the dot-com bubble followed, then the 9/11 attacks in 2001, leading to an almost 65% decrease in the value of the CAC 40 in two and a half years. During this crisis – and even though the fall in the CAC 40 was accentuated by the weight of banks in the index – REIT share values, in contrast to the CAC 40, continued to climb (even faster than gold). Over this period, investments in listed real estate companies acted like safe haven investments, with prices uncorrelated to market developments.
Conversely, after the dot-com crisis and the 9/11 attacks, that is, from around 2003, REIT market values correlated much more closely with the development of the rest of the market. This change in the behaviour of REIT share values results from a number of structural factors detailed below.
Risk–reward ratio of real estate stocks vs direct ownership of real estate assets
The figures below consider the REITs included in the IEIF Real Estate France index, in the analysis of risk–reward presented previously6.
Figure 5: Risk–reward ratio over 15 years in France – REITs vs other classes of asset, 2003–2018
Source: IEIF (Institut de l’Epargne Immobilière et Foncière), Accuracy analysis
–
Figure 5 notes:
The periods 1991–2003 and 2003–2018 are presented for the stock exchange index and REITs.
As previously observed, between 1991 and 2003, REIT shares constituted low-risk assets, indeed exhibiting a reward profile below that of the market. Conversely, the period 2003–2018 shows a change in behaviour for these assets, which became more volatile with a greater reward profile.
Further, we can observe the difference in volatility between the real estate assets (direct ownership of real estate assets) and the shares of the real estate companies. This can mainly be explained by the level of financial leverage of these companies (figure 10).
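The leverage effect can be illustrated with the standard simplification that equity volatility scales with the debt-to-equity ratio (sigma_equity ≈ sigma_asset × (1 + D/E)), which assumes riskless debt and ignores taxes. In the sketch below the asset volatility figure is invented; the D/E ratios merely echo the 0.6-to-1 rise in REIT debt ratios discussed in this section.

```python
# Sketch: how financial leverage amplifies equity volatility.
# Simplified relation: sigma_equity ≈ sigma_asset * (1 + D/E).
# Assumes riskless debt and no taxes; all inputs are illustrative, not IEIF data.

def levered_volatility(asset_vol: float, debt_to_equity: float) -> float:
    """Volatility of equity, given asset volatility and a debt/equity ratio."""
    return asset_vol * (1 + debt_to_equity)

asset_vol = 0.09  # assumed volatility of the underlying property portfolio

# Debt/equity ratios echoing the rise from 0.6 (2005) to close to 1 (2018)
for d_e in (0.0, 0.6, 1.0):
    eq_vol = levered_volatility(asset_vol, d_e)
    print(f"D/E = {d_e:.1f} -> equity volatility ≈ {eq_vol:.1%}")
```

Moving from an unlevered portfolio to a D/E of 1 doubles equity volatility in this simplified model, which is consistent with the observation that REIT shares are markedly more volatile than the properties they hold.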
Several elements can be put forward to explain the shift in the profile of real estate stocks towards greater volatility:
• Growth of financial leverage: REITs saw their debt ratio increase from 0.6 in 2005 to close to 1 by 20187.
• Underlying real estate prices grew significantly in the 2000s, thereby increasing the value of REIT portfolios and leading to significant growth of their corresponding share prices.
• The dividend distribution rate grew from 0.6% in 2005 to 6.4% in 2018 (see appendix, page 12)8.
• A tax incentive scheme (the SIIC regime, introduced in France in 2003) created a structural surplus value in this sector, even if, for several years now, the share market values of REITs have been significantly below their net asset values (mainly based on the gross market values of the underlying assets), notably for retail REITs (e-commerce impact).
• New high-performing niches have developed (e.g. logistics with the growth of e-commerce).
• There was a consolidation trend in the market, with several significant mergers as previously mentioned.
The strong performance of the real estate market also pushed numerous institutional players to increase the percentage of their investment in the sector, or even to create their own real estate funds, such as Axa, Amundi and BNP Paribas. However, when it comes to retail, performance has recently been affected by the expansion of e-commerce, further accelerated by the COVID-19 crisis (see below).
Due to several factors (including higher leverage and dividend yields), the profile of REIT stocks has changed: formerly defensive and resilient to economic cycles, they have now become volatile, yield assets.
3. THE COVID-19 CRISIS: A LEAP INTO THE UNKNOWN
We have not seen the like of the current crisis since 1929, either in terms of its nature or in terms of its current and future impact on the economy and the markets. Stock markets globally lost almost a third of their value in less than a month and numerous economies shut down as a result of the lockdown of several billion people across the world.
Listed real estate companies have not been spared: the IEIF Real Estate France index has been affected even more than the CAC 40 since the start of the year, as shown in the figure below.
This is notably the consequence of more volatile stocks (as explained previously), in particular the impact of the previously unforeseen closure of shopping centres. It is, however, too soon to conclude on any kind of development in the valuations of REITs and real estate developers.
Figure 6: IEIF Real Estate France index vs CAC 40, base 100 as at 31/12/2019
Source: IEIF, CapitalIQ
Two major risks are weighing down on the real estate market (both direct and indirect ownership):
• A restricted availability of credit: as was the case during the subprime crisis, banks may restrict access to credit for a certain period, reducing investors’ borrowing capacities or removing them from the credit market altogether.
• Business failures: business failures would lead to an imbalance between supply and demand in the professional real estate market.
As observed in the past, and provided the conditions for obtaining mortgages remain favourable, the safe haven status of directly owned real estate should make it possible for real estate transactions to pick up again and for asset values to remain stable or suffer only a very limited decrease.
However, market players – both REITs and developers – are exposed to far greater risks.
The lingering risk for REITs in the short and medium term relates mostly to rental risk. As their portfolios are largely composed of offices and retail properties, the health of the economy as a whole will have a direct impact on their level of risk.
Almost all retailers had to close for a certain period of time during the lockdown. The recoverability risk and the level of rent during and after the lockdown period are therefore very significant. This is all the more true as rent (and service charges) represents a very high fixed cost for retailers.
We are thus seeing an increase in the number of requests from retailers for the outright cancellation of rent during shutdown periods and the full variability of rent on the basis of revenue for a period to be defined (generally the time it takes to regain the pre-crisis level of activity).
In the longer term, this trend may end up calling into question the standard long-term commercial lease in France, with the aim (for retailers) of making lessors bear more of any potential operating risk.
As for real estate developers, the risks are more limited at this stage and mostly concern delays in construction, the slowdown in the commercialisation of certain assets, and changes to buyers’ needs.
Indeed, the massive use of remote working during the lockdown and the proof of its success may well lead the main users of office spaces to rethink their use of – and even their need for – their headquarters and other buildings.
CONCLUSION
Direct ownership of real estate clearly presents the characteristics of a safe haven investment (limited risk, certain return, uncorrelated with economic cycles, etc.); indeed, it has demonstrated this fact historically. Paradoxically, it has also demonstrated historically that it is at least as profitable as stocks for a more advantageous risk profile. We note, however, that these characteristics depend heavily on the development of interest rates and access to credit policies put in place by financial institutions.
Conversely and since approximately 2003, listed developers and REITs have become offensive stocks that outperform and are more volatile than the market. The new risk–reward ratio associated with them can be justified through various parameters: the sometimes high level of leverage of these players, an accommodating dividend distribution policy, the economic performance of the underlying asset and the implementation of tax incentives, notably.
The economic crisis resulting from the COVID-19 health crisis has had a direct impact on the real estate market. In the short term, the tightening of credit conditions should, in theory, automatically lead to a reduction in transaction volumes, putting deflationary pressure on prices. In the long term, we are witnessing the calling into question of the use value of certain assets (commercial, office and urban residential real estate, in particular). If this paradigm shift were to become structural, it would be a departure from the long development cycle analysed above and would necessarily result in a decline in financial value.
Appendices
Figure 7: Price of gold vs CAC 40 from 01/01/1980 to 21/07/2020
Source: CapitalIQ
Figure 8: IEIF Real Estate France index – Weighting as at 21/07/2020
Source: IEIF
Figure 9: REITs and real estate companies vs CAC 40, base 100 as at 31/12/2019
2 Proportion of real estate owners within the French and German populations 2009–2018, Statista
3 Revenues and wealth of households – 2018 edition, INSEE
4 The volatility formula used by the Institut de l’Epargne Immobilière et Foncière is not disclosed, but it is generally accepted to be an annualised standard deviation.
The social and environmental issues that we currently face constitute major challenges that raise questions for all of us. When it comes to these issues, we no longer have a choice: we need to find innovative solutions to tackle them.
Today, philanthropists are no longer the only ones addressing social and environmental issues; financial investors are now fully fledged players in this challenge.
Why? Because it has now been proved possible, and not at all contradictory, to combine social and economic performance in an investment. Indeed, key players in the public and private sectors have seized the opportunity offered by impact investments. They see impact investing as a major tool to catalyse the change of scale needed for innovative responses to fundamental issues.
This paper is aimed at providing a first insight as to how stakeholders (regulators, investors and social entrepreneurs) are working together to create a professional ecosystem looking to improve society while generating financial returns.
The next episodes of this series will take a deep dive into the main sectors, the challenges of social impact measurement and the financial innovations related to this market.
A. Impact investing refers to investments with an intentional positive impact on people and/or the environment, taking place via sustainable and profitable initiatives and subject to evolving impact measurement.
B. In 2018, this market was estimated at $502 billion, a market eight times larger in just five years.
C. This growth has been mainly driven by (i) growing social consciousness in our society regarding environmental issues, (ii) new and traditional investors searching for ways to contribute to society in order to complement limited public spending to address these problems alone and (iii) growing interest for responsible and economically sustainable business models.
D. However, this sector is facing lots of new challenges such as how to appropriately measure social impact, the necessity for a legislative framework and the constant need for innovative solutions.
1. THE CURRENT IMPACT INVESTING MARKET
A. Impact investing: much more than just a trend
Impact investing is undoubtedly a concept with various and evolving definitions. The Rockefeller Foundation used the term for the first time in 2007, during a conference organised to evaluate the possibility of developing a type of investment with a social and environmental impact.
The Global Impact Investing Network (GIIN) defines impact investing as an investment that explicitly combines social return and financial return. More concretely, it refers to investments with an intentional positive impact on people and/or the environment, taking place in sustainable and profitable businesses and subject to proactive impact measurement before, during and after the investment (greenhouse gas emissions avoided, number of jobs created, etc.). This type of investment aims to break the usual practices of financial players who used to either give money away without any expectation of financial return (“doing good”) or invest money as well as possible to maximise financial returns (“doing well”). Impact investing is therefore doing good and doing well at the same time.
It is essential to distinguish impact investing from socially responsible investments. The purpose of the latter is to exclude from investment portfolios certain companies that may be less virtuous than others or to choose, in each sector, the “best in class” according to extra-financial criteria. Unlike socially responsible investments, impact investing concerns companies where investors prioritise not so much “return” as “meaning”. It is necessary to distinguish between those companies that claim to be “green” because they apply environmental, social and governance filters when they screen financial deals and those companies whose true ambition is to create positive social and environmental impacts.
Spectrum of business based on social impact and financial return
Source: “Financial performance of impact investment” market study 2017-2018 and Accuracy analysis
B. A growing $502 billion financial market
The impact investment market has grown significantly in the last decade. In terms of assets under management, GIIN estimated a market of $60 billion in this sector in 2014¹. This figure increased to $228 billion in 2017² and $502 billion³ in 2018, a market eight times larger in just five years.
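As a quick sanity check on the figures quoted above, the growth multiple and the implied compound annual growth rate can be computed directly. This is a back-of-the-envelope illustration, not GIIN methodology; note that the estimates cited span 2014 to 2018.

```python
# GIIN estimates of impact investing assets under management, as cited above (USD).
aum = {2014: 60e9, 2017: 228e9, 2018: 502e9}

multiple = aum[2018] / aum[2014]          # overall growth multiple
years = 2018 - 2014
cagr = multiple ** (1 / years) - 1        # implied compound annual growth rate

print(f"Growth multiple 2014-2018: {multiple:.1f}x")
print(f"Implied CAGR over {years} years: {cagr:.0%}")
```

Even over the four years between the first and last estimates, the implied compound annual growth rate is on the order of 70%, underscoring how quickly this market has scaled.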
The graph below highlights the diversification of the investments in this sector. Impact investors have different asset allocation strategies, investing across geographies, sectors and types of instruments.
Assets under management (“AUM”) by geography, sector and instrument
Source: GIIN 2019 Annual Impact Investor Survey and Accuracy analysis
Regarding return on investment, contrary to the general belief that impact funds underperform traditional investment funds and do not necessarily seek market returns, GIIN’s 2019 Annual Impact Investor Survey shows that 66% of impact funds seek market-rate returns. Moreover, 77% of the funds performed in line with their expected financial return and 14% outperformed expectations⁴.
In Europe, the UK has been a pioneer in impact investing, benefiting from an innovative and growing market since the 2000s thanks to numerous political and regulatory initiatives. It has been followed by Northern European countries, like Sweden, and Western European countries, like France, the Netherlands and Switzerland.
Expectations vs. performance
Source: GIIN 2019 Annual Impact Investor Survey and Accuracy analysis
C. Consciousness + opportunity + search for meaning: three key growth drivers
The question that now comes to mind is the following: why is this market growing today? Our analysis demonstrates that two worlds are converging. First, the world of finance, which has experienced many setbacks since 2007, is seeking to include ethical considerations in its investments; it is discovering impact investing as a business model capable of generating social wealth with more secure long-term profitability. Second, the world of social entrepreneurs, who are now able to generate profitability and a strong social impact, wishes to free itself from public finance (which is more and more limited). This convergence has been driven by several factors.
• First, society is now more conscious that economic growth aspirations are outpacing global resource supplies. Currently, we consume 1.7 planets’ worth of resources⁵ and, consequently, we are facing challenges such as climate change, resource depletion, biodiversity loss, consumption patterns, pollution and waste management, supply chain fairness and widening income inequality. The United Nations has explained that significant resources are required to address these challenges. The Sustainable Development Goals (SDG) gap provides opportunities for private capital to supplement public funds.
• Second, as proved by the United Nations, social public spending is no longer growing as fast as is necessary. Charitable donations and public aid are no longer sufficient to tackle global social issues.
• Third, world leading investment firms are increasingly attracted to responsible business models and socially responsible returns. Not only does this add another layer of motivation to their staff but it also provides new business opportunities. These players are able to bring an outstanding talent pool to assist in the development of new opportunities with a long-term view on sustainability and social impact. The development of a more socially responsible form of finance is also a response to the excesses of the financial sector highlighted by the 2008 crisis.
Estimated capital requirement – Potential private sector contribution (USD trillion)
Source: World Investment Report United Nations Conference on trade and development 2014 and Accuracy analysis
2. THE FUTURE OF THIS GROWING MARKET
A. Innovate, finance and support – the new players in the game
The growth of this sector has created a new innovative and large ecosystem that includes more and more players and solutions. These new players are helping to put in place the tools that promote a more efficient functioning of this market. This new ecosystem is mainly composed of entrepreneurs, investors and support structures.
Previously considered as a “niche” for a few specific social investors, impact investing is now a new market for “traditional” investment funds. For example, a leading global investment firm announced in February 2020 the closing of its Global Impact Fund at $1.3 billion. This fund is dedicated to investments in companies with business models aimed at providing solutions to environmental or social challenges. In the same month, another leading investment fund announced its intentions to direct its investments towards companies committed to impact investing.
In addition, the EU has reached an agreement to establish European rules defining sustainable investments. This new framework (due to be in place by 2021) would give a “green” label for investments that cover renewable activities; it would grant lower labels for investments that are not fully renewable but help to reduce CO2 emissions.
Impact Investing ecosystem
Source: Accuracy analysis
B. The main challenges for a better future
What challenges does impact investing face? The first is the measurement of impact itself. Due to its specific characteristics, financial players must adopt a new logic: the objective is to generate returns on financial investment while obtaining the greatest possible social impact. One of the key challenges is to measure social impact in the same way financial performance is measured, that is, with some form of tangible metric. Several options are currently being explored to measure the “Social Return On Investment” (SROI).
The second challenge relates to the legislative framework and public incentives. The legislative framework should be adapted to allow and enhance the development of impact investing worldwide. One good example of this was introduced in the UK in April 2014 with the Social Investment Tax Relief (SITR), a tax reduction of up to 30% of the investment for individuals who invest in small social enterprises.
The third challenge relates to completing the funding chain of social companies. Like traditional investment funds, the majority of impact funds choose to finance companies either in the growth stage or at maturity. Indeed, mature companies represent approximately 55% of assets under management and companies in the growth stage represent 34%, but companies in the venture and seed stages represent only 9% and 3% respectively⁷. Therefore, the ecosystem needs to complete the social entrepreneurship funding chain so that appropriate funds are available at all the key development stages of a social enterprise.
The fourth challenge concerns the financial world’s need to continue innovating in order to find available liquidity for impact investment players. One example is the Social Impact Bond (or “Pay for Success Bond”), which pursues the objective of raising private capital to finance public social actions. Social Impact Bonds intend to fund activities which, if not carried out, will result in future costs for public authorities. The first Social Impact Bonds were launched in the UK. The project goal was to provide re-entry services to prisoners leaving prison. The bond is remunerated on the basis of a fixed and clear objective: to reduce reoffending by 10%. If the objectives are achieved, public authorities reimburse investors with capital, at a specific rate computed based on the savings made by the public authorities thanks to the bonds. However, if the objectives are not achieved, private investors lose their investment.
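The outcome-based mechanics described above can be sketched as a simple payoff function. The amounts and rates below are hypothetical illustrations of the principle, not the terms of the actual UK bond; real SIB contracts define outcome metrics and payment schedules in far more detail.

```python
# Illustrative sketch of a Social Impact Bond payoff (hypothetical figures).

def sib_payout(principal: float, target_reduction: float,
               achieved_reduction: float, success_rate: float) -> float:
    """Return what public authorities repay investors at maturity.

    If the outcome target (e.g. a 10% reduction in reoffending) is met,
    investors receive principal plus a return funded by public savings;
    otherwise they lose their investment.
    """
    if achieved_reduction >= target_reduction:
        return principal * (1 + success_rate)
    return 0.0  # outcome missed: investors bear the loss

# Hypothetical example: 5m invested, 10% reduction target, 7% success rate
print(sib_payout(5_000_000, 0.10, 0.12, 0.07))  # target met -> 5350000.0
print(sib_payout(5_000_000, 0.10, 0.06, 0.07))  # target missed -> 0.0
```

The all-or-nothing structure shown here is the simplest variant; many SIBs instead scale the payment with the degree of outcome achievement.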
CONCLUSION
All stakeholders in this innovative ecosystem have a role to play in promoting this new dynamic investment market. Financial intermediaries like banks, financial players, financial and strategic advisors and accounting firms, must also better understand how impact investing can become a vector of progress.
Companies focused on impact investing are clearly here to stay. They aim to have an impact on our society and at the same time earn a return for their investors. However, the road ahead is long and there are many challenges to establish a common framework to enhance this blooming ecosystem. The current pandemic will increase social issues. It will also force countries to rethink and redesign how we define the economy and how we assess value. Now is the time to capitalise on this sector in order to build markets that ensure sustainable growth for all. Now is the time to do “well” by doing “good”!
1 The Impact Investor Survey – Global Impact Investing Network and J.P. Morgan – May 2015
2 Annual Impact Investor Survey – Global Impact Investing Network – 2018
3 Sizing the Impact Investing Market – Global Impact Investing Network – April 2019
4 Annual Impact Investor Survey 2019 – Global Impact Investing Network – 2019
5 Ecological Footprint – The Global Footprint Network
6 Financed by public and private investment.
Accuracy advised Protect Medical, the holding company set up by Borromin Capital Fund IV in June 2019 to acquire Söhngen Group (Söhngen), on the acquisition of Spencer Italia S.r.l. (Spencer) on 17th April 2020. With this, Protect Medical is following its strategy to build a leading European first aid and EMS (Emergency Medical Services) full-service provider through organic growth and acquisition.
Great Place To Work® has announced the results for the 2020 Best Workplaces in France awards. For the 12th year in a row, Accuracy is among the top-ranked participants, taking fourth place in the 50–500 employees category.
Accuracy conducted buy-side due diligence for La Caisse de Dépôts et Consignations in the context of the acquisition of investment stakes held by l’Agence des Participations de l’Etat and La Banque Postale in the Société de Financement Local (SFIL), ex-Dexia Crédit Local.
Episode 1:
How do business support structures enable value creation in France?
Episode 2:
Business support structures looking for a new role and new models
IN SUMMARY
At a time when innovation is increasingly becoming the driving force behind all economies, start-ups find themselves on the front line thanks to their simple and agile structures that enable them to venture into the most promising sectors.
However, innovation cannot develop successfully without a complete and adapted ecosystem, bringing together all the players that must interact and join forces to bring innovative projects to life (organisations, companies, start-ups, universities, investors).
This article details the dynamics that are forging the innovation ecosystem in Morocco, whether related to national strategies or private initiatives. Our mapping of support structures makes it possible to assess strategic issues in particular, for any country looking to take full advantage of its potential for talent and entrepreneurship.
This is key as innovation constitutes a critical lever for the economic growth and development of a country.
A. Support structures for innovative start-ups have multiplied in Morocco thanks to political and private initiatives.
B. Nonetheless, their presence in Morocco remains uneven and concentrated in the Casablanca region.
C. Large Moroccan companies are progressively contributing to the innovation ecosystem and are starting to use open innovation as a value creation lever.
D. Moroccan innovation, measured in terms of patent applications and start-up fundraising, is still not achieving its full potential.
1. INNOVATION SUPPORT STRUCTURES IN MOROCCO
WHAT DOES THE CURRENT INNOVATION LANDSCAPE IN MOROCCO LOOK LIKE?
A. A growing number of structures
Support structures are composed of both physical and non-physical structures. They include (i) incubators and accelerators, (ii) co-working spaces, (iii) support programmes and (iv) financing programmes. Based on our research, there are 74 active and planned support structures in the country.
Among them, Technopark was the pioneer and now constitutes a textbook case. Created in 2001 as the fruit of a public–private partnership, Technopark is managed by the MITC (Moroccan Information Technopark Company), whose founding shareholders are the Moroccan state (35%), the Caisse de Dépôt et de Gestion (17.5%) and Moroccan banks (47.5%). The MITC offers work spaces and supports start-ups by allowing them to benefit from its privileged ecosystem. The model was duplicated in Rabat in 2012 and Tangier in 2015 and will soon be duplicated in Agadir (opening planned in 2021). Technopark has supported over 1,100 companies since its creation, particularly in the information and communication technology (ICT), green technology and cultural industry sectors. It is well known that start-ups require financial support and specialised assistance. But the need to integrate them into a community where they can interact and exchange ideas is just as vital. Indeed, a start-up community represents a rich and diverse source of collective intelligence, which enables start-ups to discuss ideas in co-working spaces, exchange best practices and build a network to develop.
New structures have developed in Morocco with this very thinking in mind, offering support, training and mentoring services. These support structures organise different events like hackathons, where various teams (composed of developers and project leaders) must find the solution to a strategic issue by producing a proof of concept (in general, software or an application) in a very short space of time. In December 2019, Emerging Business Factory organised the first ‘water hackathon’ in Marrakech with the aim of making water use in the area sustainable and eco-responsible.
Other support structures have implemented co-working spaces for all those who wish to launch their entrepreneurial projects and are looking for a community of partners. One such example is New Work Lab, created in Casablanca in 2013. It is a space dedicated to the development of Moroccan start-ups through the organisation of meetings, training and the provision of a co-working space.
Mapping of start-up support structures
B. An uneven geographic split
Although support structures are concentrated primarily in and around Casablanca, regional dynamics resulting from a strong political will take shape through:
• the duplication of Technopark in other cities in the country;
• regional development projects, such as the innovation city of the Souss-Massa region, which plans to make R&D laboratories available to start-ups, or Mazagan’s urban hub, developed by the OCP and the government;
• support mechanisms on a national scale, with for example the Réseau Entreprendre Maroc and Injaz Al-Maghrib, which support start-ups, or even the financing programme Fonds Innov Invest.
However, the support offered in some large Moroccan cities, like Fes and Meknes for example, is far below the needs of their large student populations.
At the start of the school year in 2017, the Euromed University of Fes (UEMF) had over 1,300 students and researchers², suggesting a potential talent and entrepreneur pool that should not be neglected.
C. Mostly generalist structures supported by a wide range of sponsors
Though the vast majority (75.7%) of support structures are generalist, three specialisations stand out:
• ICT, in particular thanks to the rise of fintechs working with corporates (e.g. StartOn, Fintech Challenge);
• green technology, with Morocco having set itself the target of reducing its energy dependence and investing in renewable energies (e.g. Social Green Tech Bidaya);
• the social and solidarity economy, relying on, for example, sport to create a link between youth employment and the entrepreneurial spirit (e.g. TIBU Maroc).
It is interesting to note that the sponsors of support structures are diverse: 57% of support structures are backed by at least two organisations (assistance, financial support, etc.). Further, 32% of these structures come from public-private partnerships. Entrepreneurial support initiatives thus form part of a collective intelligence approach, a pooling of resources between complementary players – in short, open innovation.
2. THE GROWING INVOLVEMENT OF LARGE COMPANIES
HOW ARE LARGE MOROCCAN COMPANIES TAKING HOLD OF INNOVATION?
A. OCP: a heavyweight in the national economy and a global innovation model
Moroccan companies are gradually incorporating open innovation and digitalisation into their organisations in addition to increasing their employees’ awareness of innovation culture. A good example of this can be found in the OCP group.
OCP is the world leader in phosphates and the leading industrial company in Morocco. It has put in place an ambitious investment programme (2008-2027), where it aims to double its mining capacity and triple its transformation capacity.
Of particular interest, however, are its efforts to boost innovation. Indeed, it has initiated several projects to stimulate innovation in Morocco and within the group. In addition to physical support structures, numerous programmes have been implemented, such as the Seedstars Startup Competition or the Impulse acceleration programme in partnership with MassChallenge, as detailed below.
The university environment gives us access to innumerable research centres across the world and open innovation […]
When we are at the university, we are able to have a different type of dialogue, one that is much more productive
Mohamed Soual, chief economist at OCP.
B. The growing involvement of Moroccan banks
Moroccan banks are not to be outdone in this matter. Attijariwafa Bank and BMCE Bank of Africa were pioneers in 2001 by financing Technopark Casablanca. They have been highly active in the promotion of innovation over the past five years.
Moroccan banks’ initiatives in favour of entrepreneurs have naturally led them to turn their innovation approach inwards to improve their own processes and offers in the context of increasing digitalisation. But the delegation of support management to a pure player is often essential in order to facilitate cooperation and maximise value creation between stakeholders. This is particularly the case when the stakeholders have very different cultures, notably when it comes to public-private partnerships.
Though Morocco’s start-up ecosystem has been strengthened by the launch of various support and financing mechanisms (as detailed below), the number of innovative technology companies in the country measured in terms of patents and fundraising is not meeting its potential, as detailed in the following pages.
Management of support structures created by Moroccan companies on the Moroccan All Shares Index (MASI)
3. INNOVATION AND ITS FINANCING IN MOROCCO
HOW HAS INNOVATION DEVELOPED IN MOROCCO OVER THE PAST FIFTEEN YEARS?
A. Successive industrialisation strategies have contributed to raising the general level of innovation
Comprehensively evaluating the innovative nature of a country requires consideration of its institutional environment, infrastructure, training, R&D, and market structure and creation. The Global Innovation Index 2019 ranks Morocco 74th out of 126 countries based on 80 variables ranging from ease of obtaining credit to the protection of minority interests in a company. This index also distinguishes between input variables that define a country’s potential for innovation and output variables that measure effective innovation.
Our analyses here focus on the two output criteria that seemed the most tangible: research dynamism, measured through the number of patent filing requests (industry driver), and fundraising for technology and digital start-ups, which testifies to the potential for economic development. Reviewing the development of these variables against the backdrop of successive industrialisation plans implemented by the Ministry of Industry, Trade, and the Green and Digital Economy since the mid-2000s highlights a correlation. As shown in the figure below, thanks to industrial policies, as well as the country’s stability and closeness to the European Union, Morocco has become a top destination for foreign investors.
Since 2005, three major industrial strategies have succeeded each other, with a substantial effect on the development of the number of patents filed. Nevertheless, these effects seem to differ based on the nature of the players considered. Indeed, patents filed by non-residents tripled between 2014 and 2018, whilst those filed by Moroccan residents almost halved over the same period.
The dynamism of national research seems to be losing momentum and remains mostly the domain of universities (58% in 2018), with Moroccan companies accounting for only 9% of patent applications.
At the same time, the significant growth in filings from abroad testifies to the increased appeal of the country. This can be explained by two factors. First, the presence of foreign actors in Morocco has intensified across various sectors, like the automotive or aeronautics sectors, following the Industrial Acceleration Plan. Second, the implementation of a new way of filing patents by the European Patent Office, thanks to a partnership with the Ministry of Industry, Trade, and the Green and Digital Economy in 2015, enables those filing patents in the European Union to also request patent protection in Morocco. Thus, the USA (20%) and European countries, with France and Germany in the lead (8% each), are the most represented among the countries of origin of those filing patents.
Patent filing requests in Morocco (2005–2018) and industrialisation strategies
Note: We have analysed the development of the number of patent applications because of the ease of access to this data. Though this makes it possible to have a vision of the effects of the successive industrial policies, it does not make it possible to assess the entirety of their effects.
B. But the funds raised by start-ups remain modest compared with other countries in the region
Fundraising constitutes another indicator of the dynamism of the innovation sector. Though it is difficult to establish a causal link between fundraising and the practice of filing patents, these two phenomena constitute complementary indicators of the dynamism of innovation in the countries concerned.
The spread of fundraising practices reflects, in particular, the growing participation of national and international private players in innovation financing. However, the small amounts concerned tend to show the predominance of public capital in innovation financing or indeed the internalisation of innovation by existing companies.
In terms of fundraising, Morocco places 12th in Africa in 2019 with USD 7 million raised by technology and digital start-ups (vs USD 3 million in 2018, corresponding to 15th place)³. We have gathered data to compare the situations in Algeria, Tunisia, Nigeria, Kenya and Egypt with that of Morocco. The differences can be explained by various factors such as access to financing, financing raised in other countries or the use of alternative means of financing. In particular, we have put the amounts raised in perspective by setting them against the respective GDPs of each country. Finally, for all these countries except Algeria (for which we do not have enough data), we have studied the development of the amounts raised between 2018 and 2019.
Generally speaking, we note an increase in fundraising over these two years: Morocco, Kenya, Nigeria, Egypt and Tunisia all experienced a significant increase in the amounts collected. As for the amounts themselves, Kenya and Nigeria stand out clearly from the other countries. The case of Nigeria may be explained simply by the size of the country’s economy (USD 368 billion in 2018); the case of Kenya, however, is different (USD 87 billion in 2018). Indeed, Kenya proves to be fertile ground for the development of start-ups. Widely distributed low-cost internet and the digitalisation of payments in 2007 with the launch of M-Pesa, a mobile phone money-transfer system, have greatly facilitated transactions and have been a boon to entrepreneurs in the country.
By way of comparison, the amounts raised in Morocco seem low in relation to the country’s GDP (USD 120 billion in 2018). Beyond the difference in the size of the economies considered, there may be various reasons for this result: fewer private initiatives due to an economy structured around rent-based activities or in low-risk sectors, insufficient tax incentives for both entrepreneurs and investors or even the less prevalent practice of fundraising.
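Setting the amounts raised against GDP, as described above, amounts to a simple ratio. The sketch below uses only the Moroccan figures quoted in the text (USD 7 million raised in 2019 against 2018 GDP of USD 120 billion); the helper name is our own.

```python
# Putting start-up fundraising in perspective against GDP,
# expressed per USD 1 million of GDP for readability.

def raised_per_gdp(raised_usd: float, gdp_usd: float) -> float:
    """Funds raised per USD 1m of GDP (a rough intensity measure)."""
    return raised_usd / (gdp_usd / 1e6)

# Moroccan figures cited in the text: USD 7m raised (2019), GDP USD 120bn (2018)
morocco = raised_per_gdp(7e6, 120e9)
print(f"Morocco: ~${morocco:.0f} raised per $1m of GDP")
```

Comparing this intensity measure across countries, rather than raw amounts, controls for the difference in economic size highlighted in the Nigeria and Kenya discussion above.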
The lower access to fundraising in Morocco compared with other African countries (Egypt, Nigeria, Kenya, Rwanda, etc.) can also be explained by language. In contrast to francophone Morocco, the other countries mentioned are anglophone, meaning they are more easily able to capture foreign capital (particularly from the US and the UK).
These figures can be used on a macroeconomic scale to measure trends, such as the opening of certain economies to foreign capital, but they also reveal the level of appropriation of certain best practices by local players. In this context, public policies can play a facilitating role in a ‘top–down’ approach. Nevertheless, local realities should not be ignored. Indeed, beyond public policies, it is the players’ actions and the quality of their interactions that enable them to create together innovative programmes and determine the dynamism of a sector. OCP, as mentioned in part 2, is a prime example of this and highlights the importance of involving all players in the implementation of an innovation ecosystem.
Thus, using best practices inspired by foreign countries could strengthen local ecosystems. These measures would be of such a nature as to enable the realisation of the innovation potential of a country like Morocco by promoting the rise of start-ups.
Development of fundraising between 2018 and 2019
Source: Partech (fundraising) and World Bank (GDP)
1 Relationship between the number of existing support structures in the region and all existing support structures in Morocco
2 L’économiste.com, Edition n°5032, 2017, ‘Fès-UEMF : Une université à la fine pointe de la technologie’
3 Partech, 2019 Africa Tech Venture Capital report, page 13 (released in January 2020)
Note on methodology: Partech, a venture capital fund, publishes its ranking of fundraising rounds in Africa annually. The start-ups taken into consideration must fulfil the following criteria: (i) the start-ups are tech and/or digital, (ii) their market is Africa (in both operations and revenues) and (iii) the funds raised exceed USD 200,000.
Congratulations to our client Gimv for the acquisition of Köberl Group! The group is one of the leading full-service providers of facility management and technical building services in the southern German market. Accuracy provided financial due diligence services and SPA advice.
The 2020 results of banks in France largely support the long-term trends in retail banking in the country.
With this in mind, it should prove interesting to analyse the figures over the past five years to evaluate the impact of the disruptions at work. This will help to understand the marked decrease in the relative weight of retail banking in the results of the six largest French banks, whether it makes up a significant proportion of their income (Mutualistes, Banque Postale) or a much smaller one (BNP Paribas, Société Générale).
The fall in revenues of around 1 % per annum for all banks combined is the main driver of these developments. Within net banking income (NBI), it is of course the interest margin that is declining, partly because of low interest rates and partly because of commercial practices.
This decrease in margin translates into a much steeper fall in gross operating profit in retail banking, down 5 % per annum since 2014. However, as the cost of risk has decreased significantly, the fall in net profit has been limited.
In addition to this macroeconomic context, French retail banks are suffering from the specificities and practices of the French market.
Mortgages, for example, as the “hook” product of the customer relationship, have always generated particularly low margins for banks in France. This proved highly detrimental during the waves of early repayments from 2015 to 2018 and ended up costing the system several billion euros in NBI for individual gains in market share that were practically non-existent.
To compensate, the banks have all sought to expand their outstanding amounts: those linked to mortgages have increased by 28 % since 2014, passing from €833 billion to €1.071 trillion. Questioning the profitability of this choice is all the more pertinent given that French banks often use brokers for mortgages (40 % of volumes), despite having some of the densest branch networks in Europe.
The other major supply of credit, consumer credit, generates structurally higher margins. The outstanding amounts are approximately five times smaller than for mortgages (€188 billion in September 2019), and the market is dominated by the specialised entities of BNP Paribas, Crédit Agricole and Crédit Mutuel (80 % market share between them).
But consumer credit has seen growth rates of 3 % per annum since 2014, and it forms a major part of the strategic plans of all banks. It regularly sees innovative new products, such as the recent split payments innovation, and competition is expected to grow in this business area in the years to come.
In terms of savings, the two regulated products, Livret A and PEL, which are specific to the French market (but correspond approximately to instant savings and home ownership savings accounts), represent over €540 billion in savings at the end of September 2019. The rates that they offer, however, are more of a hindrance than a help to French banks in the current economic context.
Indeed, home ownership savings represent a specific difficulty for retail banks because of their even higher interest rate. This rate has fallen from 2.5% to 1% since 2014, but that has not stopped the outstanding amounts from climbing by €60 billion over the same period, putting a further strain on the NBI of the banks that collect them, with an average rate of 2.65%. Here again, the banks do not all follow the same policy.
Therefore, whilst the six main French banks suffer in varying degrees when it comes to retail banking, they do not all have the same strategies. Those banks with the strongest networks may be able to choose between products and volumes, but those with less extensive networks must further diversify and more closely monitor the profitability of each activity.
The symptoms may vary, but for banks to restore their profitability, the remedies that they must employ are probably the same: continue to better segment and personalise offers, and invest so as not to be left behind by neobanks in terms of customer experience. They might also choose a different approach to mortgages; this already seems to have been the case since the beginning of the year.
If innovation is a strategic issue both on a ‘macro’ scale in terms of national economies and on a ‘micro’ scale in terms of the businesses involved, then so is its financing. This financing relies heavily on business support structures.
In France, the first support structures aimed to provide an outlet for public research, but the rise of private structures has come hand in hand with a growing awareness of the need for profitability. As a result, though the number of business support structures continues growing, it is no longer uncommon to see some placed under compulsory liquidation (Ekito, 33 Entrepreneurs) or required to change course (Numa, Usine JO).
A viable business model is difficult to achieve when providing services aimed only at start-ups. Fundraising has therefore become an entirely separate source of revenue for business support structures, in a context of heightened competition between them. Moreover, a structure’s ability to support fundraising – measured by the number of rounds undertaken as well as by the amounts raised – has become an indicator of performance, sometimes somewhat reductively.
At the same time, the way in which traditional fundraising works is being called into question by the rise of new mechanisms, such as crowdfunding or the use of blockchain and, more generally, by significant societal developments. It is important to assess both the potential and the limits of these new tools and means of financing, as well as the perspectives that they open up to financing players, first among which are the business support structures.
A. Business support structures play an important part in the financing of innovation in France.
B. Different support structure models offer different approaches when supporting fundraising.
C. Although continuously growing, traditional fundraising measures come with a number of significant limitations for start-ups.
D. On a global scale, the Initial Coin Offering (ICO) phenomenon represents an attempt to reinvent fundraising, calling into question in particular the role of traditional players such as investment funds and business support structures.
E. However, taking advantage of the limitations of ICOs, new practices are already coming to the fore; they present new opportunities for innovation players in France.
1. INNOVATION FINANCING IN FRANCE: THE ROLE OF BUSINESS SUPPORT STRUCTURES
A. The substantial growth of venture capital activities in recent years has considerably strengthened the role of business support structures
Venture capital activities – the financing of risky companies with strong potential – have grown significantly in France since 2015 (almost 30% per annum on average for fundraising over the period 2015–2019). In 2019, over €5 billion was invested in these types of operation.
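As a quick sanity check on these orders of magnitude, the compound annual growth rate (CAGR) formula links the ~30% annual growth to the €5 billion raised in 2019. Note that the 2015 base amount below is an assumed figure chosen purely for illustration; only the 2019 total comes from the text.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between a start and an end value."""
    return (end / start) ** (1 / years) - 1

# The 2019 total (~EUR 5bn) is from the text; the 2015 base (~EUR 1.8bn)
# is an assumption for illustration only.
growth = cagr(start=1.8, end=5.0, years=4)
print(f"Implied growth: {growth:.1%} per annum")  # close to the ~30% cited
```

A base of roughly €1.8 billion in 2015 is consistent with the cited growth rate; a different base would simply shift the implied CAGR.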
This phenomenon should be viewed against the backdrop of the growing number of start-ups over the past decade, which has brought with it the development of the business support structure market. These business support structures have also seen their number skyrocket, and over 700 communes in France have at least one.
Business support structures are generally the first to assist companies in their fundraising. Indeed, 74% of total venture capital investments in France are used to finance incubated start-ups1. Further, the incubation of a start-up by a business support structure maximises its chances of succeeding in its fundraising. This is because (i) being incubated sends a positive signal to potential investors and (ii) the business support structure makes it easier to connect young businesses with investors, in addition to providing them with the knowledge necessary to undertake such operations.
Start-up fundraising in France by year [€bn]
Sources: EY 2019 barometer of venture capital, Accuracy analyses
B. Essential support in advance of and throughout the fundraising process
Beyond purely belonging to an ecosystem – a source of value in itself for all stakeholders – there are three major benefits when an innovative project works with a business support structure: (i) administrative and financial support, (ii) the human support necessary for the operational and strategic structuring of the company and (iii) the means to measure market response. This last benefit can help when reflecting on the business model, defining customer segments, undertaking targeted surveys or even developing a user-centred approach along the lines of design thinking. All these actions make it possible to gain commercial traction and to make progress towards a proof of concept. This is essential for the initial fundraising round to succeed and to open access to further private financing at a later stage.
The start-ups that succeed in their fundraising are, in a way, privileged, and the role of business support structures is fundamental well in advance of this stage. The diagram below, which displays the resources available to innovative businesses depending on their level of maturity, clearly shows the stakes surrounding support provided in advance. It shows in particular the ‘Death Valley’ phase, a tricky step where many start-ups fall and unfortunately do not get back up.
Whilst financing mechanisms for innovation are available in the technological and economic maturation phases, difficulties arise during the proof of concept phase and at the commercial launch. This is the moment when the start-up generally needs additional financial resources to boost its commercial traction, but it also comes precisely when the start-up has used up all its equity funding. Not yet sufficiently attractive to private investors looking for commercial growth (and therefore waiting for proof that customers have validated the offer), the start-up finds itself in danger of failing.
This state of affairs is all the more significant in non-metropolitan areas. Indeed, in such areas, private financial resources are less accessible, whilst the need is greater due to a lack of available technical skills (concentration of key profiles such as developers in large cities) and a smaller ecosystem (difficulty obtaining access to large customer accounts, industrial partners, financing specialists, etc.).
Detail of key steps in the creation and financing of a start-up
Therefore, the role of business support structures before fundraising is vital, particularly in non-metropolitan areas, to reduce the length of time spent in ‘Death Valley’ as much as possible. More precisely, this means delaying entry into this phase, whilst anticipating the exit on the other side.
The support provided will therefore follow two complementary and interdependent axes (illustrated in the diagram above):
• The acceleration of technological and commercial maturation: the aim here is to guide the start-ups and give them the technical means necessary to realise their idea and bring it to market. The technical improvement and economic development of the project feed into each other in an iterative process centred on the customer. Ultimately, this leads to the minimum viable product and an initial commercial proof of concept.
• The administrative and financial engineering aiming to maximise leverage effects: to obtain the proof of concept, financial resources need to be anticipated, mobilised and optimised to meet technical, human and commercial needs. However, start-ups sometimes do not sufficiently understand the innovation financing chain, particularly the mechanisms offered by structures like Bpifrance or by regional authorities, which are increasingly comfortable exercising their economic remit. Beyond pure knowledge of these mechanisms, business support structures help start-ups to use them at the right time and to benefit from substantial experience in administrative engineering.
These two axes help to secure the start-ups’ development path and make them desirable to investors, whilst giving them more time and therefore greater negotiating power.
“Our incubator’s knowledge of innovation financing is a key factor in our current growth. It has enabled us to obtain proof of the technical and commercial relevance of our solution and, therefore, prepare our recent fundraising round of €1.5 million with greater peace of mind.”
Pierre Naccache co-founder of Asystom
C. Differentiated approaches based on the business support structures
Business support structures offer a wide variety of sizes and models, with varying profiles and therefore approaches.
Private structures, public structures, university structures
Three main families of support structure can be identified: those financed primarily by public subsidies, those supported by universities and those where private capital dominates. The publicly financed structures can concentrate on a wider range of topics and support a larger number of start-ups; for structures supported by universities or private capital, their economic equation requires them to take a narrower and more selective position in order to generate profits.
The public or private nature of the support structures also has an influence on how investors perceive the support.
We have analysed the top 20 support structures in terms of the average amount raised and the number of fundraising rounds supported. They represent 39% (i.e. a cumulative amount of approximately €1,750m) of the total amount of funds raised by incubated start-ups in France.
It is interesting to note that public support structures (excluding universities) represent 35% of this fundraising by number of rounds. Further, two of them hold the top two spots (Agoranov and Bpifrance Le Hub – see chart below). This success can be explained by the fact that their presence in a particular operation sends a very positive signal to investors: their engagement is a sign of stability, in that they do not necessarily favour short-term profitability, but take into account economic development, the strengthening of local ecosystems or support for strategic sectors.
Structures that have supported the most start-ups towards fundraising
Source: Study on the fundraising activities referenced by Capital Finance over the period 31 March 2017–8 April 2019
The top three support structures in France in terms of fundraising
Note: Four grandes écoles incubators place in the top 20 (Drahi -X Novation Center, ParisTech Entrepreneurs, Incubateur HEC and ESSEC Ventures). This can be explained by the fact that these structures aim to develop and apply scientific innovations, but also by their extensive networks of alumni, particularly among the main financing players (investment funds, banks, business angels, administration).
Venture capital structures, large group structures
For private structures, two main types of business plan stand out: one based on the integration of a venture capital activity, the other organised around strong links with a large group.
In the first case, the structures support a small group of high-potential start-ups, in which they also acquire shares. Undertaking subsequent fundraising rounds is therefore a necessary condition of profitability for these players.
In the second case, the support structures have more of a technology-monitoring role for the group to which they are attached. Through them, the group takes a stake (often a minority interest), aiming to create new product and service lines that fit the core business of the group or to counter the risk of potential disruptions.
Generalist structures, specialised structures
Generalist structures generate 44% of fundraising; the remainder is generated by support structures with specific sectoral positioning. The information and communication technology, health and energy sectors are particularly well represented, covering 21%, 8% and 7% of the number of specialised structures respectively. As for the average amounts raised by sector, food, energy, telecoms and chemicals cover the most significant amounts. Note that the average of the foodtech sector is inflated by the record level of funds raised by Wynd (€72 million), a start-up supported by ShakeUpFactory, an accelerator specialised in foodtech2.
Number of fundraising rounds by incubation sector
Source: Study on the fundraising activities referenced by Capital Finance over the period 31 March 2017–8 April 2019
More or less selective structures
A support structure’s ability to generate funds is not directly linked to the number of start-ups it supports – though the contrary could well be expected.
If we consider the 17 largest support structures in terms of number of fundraising rounds undertaken, they support on average 64 start-ups (49 excluding Wilco, which has supported over 300).
However, the three most ‘successful’ structures (Agoranov, Bpifrance le Hub and The Family) all support a smaller number (41, 55 and 17 respectively).
This reflects the varying degree of selectivity between support structures. In particular, an investment-fund-type structure like The Family will choose projects primarily based on their future ability to raise funds. Public structures like Agoranov and Bpi have selection criteria that include short-term fundraising prospects but also wider objectives, such as offering a commercial outlet for technologies developed in public research laboratories.
Number of start-ups supported and fundraising rounds undertaken by structure
Source: Study on the fundraising activities referenced by Capital Finance over the period 31 March 2017–8 April 2019
D. The limits of business support structures
The growth of the amounts invested in venture capital and the multiplication of business support structures hide an uneven situation in France, as well as the numerous inefficiencies that start-ups face.
First of all, though business support structures have certainly spread out across the country in recent years (particularly via the FrenchTech label), fundraising remains concentrated in Île-de-France. Start-ups in this area represented 75%3 of the amounts collected in France in 2019 and 9 of the 10 most significant amounts raised. This can be explained by the fact that the majority of investment funds and business angels are based in Paris.
Further, as well-intentioned as the support provided may be, fundraising can be seen as a risk by entrepreneurs looking to retain control of the management of their company.
Finally, undertaking fundraising rounds remains a difficult task for start-ups: they have to invest significant human and financial resources in processes where the outcomes are unknown; they often have to repeat the processes for each potential investor with no possible economies of scale; and they are often limited to a national target so as to reduce the number of physical meetings.
In this context, alternative modes of financing have started to emerge little by little, namely crowdfunding mechanisms (crowdfunding, crowdlending, crowdequity) and blockchain mechanisms (Initial Coin Offering, Security Token Offering).
Number and value of fundraising rounds by region in France in 2019
Source: Accuracy and Eldorado analyses
2. THE INITIAL COIN OFFERING (ICO) PHENOMENON – THE WRONG ANSWER TO REAL PROBLEMS
A. The beginning of ICOs
The years 2017–2018 saw the rise of a new way of fundraising for start-ups: Initial Coin Offerings or ICOs, a play on the term IPO, for Initial Public Offering. An ICO corresponds to the issue on the primary market of an asset (a token), of which the ownership and transactions are recorded on a blockchain4. ICOs quickly became an innovative means of raising funds for tech entrepreneurs.
In this process, tokens represent the future right of use of a service, with the token issue guaranteeing the financing. They almost equate to a voucher that can be resold on the secondary market. ICOs can therefore resemble crowdfunding, as the issue aims to finance a service that usually only exists at the fundraising stage of the project.
As soon as the tokens are issued, they can be exchanged directly on the secondary market. Their value is determined by the demand for the service that requires or results from their use. Investors therefore bet on the growing adoption of the service to maximise their return on investment. In addition, it is not rare for project owners to reserve a portion of the tokens issued for themselves in order to benefit from the success of their service.
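The mechanics described above can be sketched as a toy model. This is an illustrative simplification only – real ICOs are implemented as smart contracts on a blockchain – and every name and figure in it is invented.

```python
class TokenSale:
    """Toy model of an ICO-style token issue (illustration only:
    real ICOs run as smart contracts on a blockchain)."""

    def __init__(self, total_supply: int, founder_reserve: int, price_eur: float):
        self.price_eur = price_eur
        # Project owners often reserve a portion of the tokens for
        # themselves to benefit from the service's later success.
        self.balances = {"founders": founder_reserve}
        self.unsold = total_supply - founder_reserve
        self.raised_eur = 0.0

    def buy(self, investor: str, amount_eur: float) -> None:
        """Primary-market purchase: the issue finances the project."""
        tokens = amount_eur / self.price_eur
        if tokens > self.unsold:
            raise ValueError("not enough tokens left in the sale")
        self.unsold -= tokens
        self.balances[investor] = self.balances.get(investor, 0) + tokens
        self.raised_eur += amount_eur

    def transfer(self, sender: str, receiver: str, tokens: float) -> None:
        """Secondary-market exchange: tokens change hands freely
        as soon as they are issued."""
        if self.balances.get(sender, 0) < tokens:
            raise ValueError("insufficient balance")
        self.balances[sender] -= tokens
        self.balances[receiver] = self.balances.get(receiver, 0) + tokens

sale = TokenSale(total_supply=1_000_000, founder_reserve=100_000, price_eur=0.5)
sale.buy("alice", 10_000)             # alice receives 20,000 tokens
sale.transfer("alice", "bob", 5_000)  # resale on the secondary market
```

The sketch captures the two stages the text distinguishes: the primary issue, which funds the project, and the free secondary-market exchange, on which investors' return-on-investment bets rest.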
How an ICO works
B. After seemingly replacing traditional financing means, ICOs are in sharp decline
After the very first ICO in July 2013 by the Omni project, these operations multiplied in parallel with a significant increase in the market capitalisation of cryptocurrencies, the primary investment vehicle for these operations.
Between September 2017 and November 2019, more than $29 billion was raised in the world by this type of operation, primarily at the end of 2017 and in the first half of 2018 (between September 2017 and December 2017, the capitalisation of this market quadrupled). It should be noted that in France, the number of these operations over the same period was more limited, with only 48 ICOs and $153.6 million raised5. However, this is not representative of the situation in the country, insofar as numerous entrepreneurs chose to organise their ICOs elsewhere6.
Whilst the success of this method of financing can be explained initially by a certain number of specific circumstances, such as the strong increase in the capitalisation of cryptocurrencies, other structural factors need to be considered in order to take the full measure of this phenomenon.
First, the relative efficiency of this type of fundraising – that is, the ratio between the resources (both human and financial) mobilised by a start-up and the amounts collected – is significant. For a small company, issuing tokens on the primary market is in theory less expensive than classic fundraising and makes it possible to access a greater number of investors. As for larger companies, an ICO is also less expensive than issuing regulated financial securities.
Breakdown of funds raised in France and across the world in 2018 and 2019 by fundraising process
This mechanism also has the advantage of security, the blockchain being by design highly resistant to falsification. Finally, it offers better liquidity because the securities can be easily exchanged, in contrast to a direct investment in a start-up, which is far from liquid.
The strong growth of the amounts collected during ICOs could give the impression that they would become the indispensable fundraising tool for innovation. However, since the second half of 2018, the amounts collected, as well as the number of ICOs, have been in sharp decline: $2bn per month on average between September 2017 and August 2018, against $0.36bn (i.e. roughly five times less) between September 2018 and November 2019.
Such a slowdown can be explained in particular by the fall in the market capitalisation of cryptocurrencies. Indeed, the loss of value of cryptocurrencies has reduced investors’ available funds and has weakened the cash positions of companies that retained amounts raised in the form of cryptocurrencies. The inordinate exposure of some projects to price fluctuations has led to numerous bankruptcies, which has also highlighted the complete lack of protection for token-holders. The fall in the number of ICOs can also be explained by the numerous fraud schemes and scams that plague this type of operation, of which some have featured heavily in the media.
Despite this mixed record, linked to a passing trend for cryptocurrencies, ICOs remain important for three reasons: (i) they represent an expression of mistrust towards historical players in fundraising (advisory services, traditional investors and business support structures), (ii) they serve as a means of obtaining financing for those that could not obtain it elsewhere and (iii) they are attractive to those with a pronounced desire for innovation. It is on these elements that a new generation of financing models is now being built.
Amounts collected by ICO, number of ICOs and market capitalisation of cryptocurrencies
3. NEW PRACTICES PRESENT NEW OPPORTUNITIES FOR INNOVATION PLAYERS
A. Initial Exchange Offering: a return to trusted third parties?
An Initial Exchange Offering (IEO) is an ICO undertaken directly on a cryptocurrency exchange platform (an exchange). In this process, the exchange plays the role of the trusted third party. By sorting through the projects and undertaking sufficient due diligence on them, the exchange guarantees investors that the project is serious and, in particular, that it has potential. Indeed, the exchanges7 have the skills necessary to value the projects and put their reputation on the line by agreeing to list them – thus providing the projects with the liquidity that investors seek.
The IEO practice has been growing since the end of 2018: whilst the number of ICOs has been consistently falling since March 2018, the number of IEOs has been consistently rising since January of the same year.
This recentralisation is somewhat paradoxical in light of the decentralisation aims of blockchain, but it makes it possible to assuage the concerns of investors. By repurposing the principle of ICOs, IEOs show that this innovative process can still be relevant for certain projects, whilst conserving its far from negligible advantages over traditional fundraising, provided that the asymmetry of information between start-ups and investors is remedied. It highlights that only a thorough analysis of the business model, the addressable market and the benefit of blockchain technology can guarantee a project’s reasonable chances of success.
In this sense, the ICO experiment makes the case for the increasing involvement of trusted third parties such as advisory firms, support structures and, in the case of IEOs, exchanges.
Amounts collected by ICO / IEO and market capitalisation of cryptocurrencies
B. Security Token Offering: the tokenisation of financial securities
A Security Token Offering (STO) corresponds to an issue on the financial securities primary market, represented by a token on a blockchain. Far from the simple voucher seen in the context of ICOs or IEOs, a token here gives rise to a right to future revenues, either fixed or variable depending on its configuration. As financial securities are subject to strict regulations, investors have potential recourse against ill-intentioned start-up entrepreneurs.
Blockchain here represents only the technological infrastructure on which the transactions are registered. However, it opens up wider perspectives in terms of the digitalisation of financial securities and beyond. The acquisition of a €5 million stake in Tokeny Solutions8, a Luxembourg start-up specialised in the “tokenisation”9 of financial assets, by stock market operator Euronext, follows this dynamic.
Indeed, blockchain characteristics make it possible to consider the low-cost securitisation of any type of asset, whether works of art, financial securities or real estate. The liquidity of these securities on the secondary market is not yet guaranteed due to the lack of exchange platforms with the necessary licences and sufficient volumes, but dozens of projects across the world are under development in this area.
Examples of STO projects in progress around the world
Blockchain also represents an opportunity to develop innovative financial securities, which for example allocate to investors a percentage of certain elements determined in advance, such as operating profit or profit before tax. Whilst this type of instrument was already possible before blockchain, the automation made possible by smart contracts10 changes the cost/benefit equation. This would mean generating wider interest among investors, in exchange for less involvement in governance.
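The revenue-sharing logic described above can be sketched as follows. This is an assumed design for illustration only – the payout ratio, names and figures are invented, and real smart contracts are written for a blockchain platform, not in plain Python.

```python
# Toy sketch of a security-token payout rule: holders receive a
# pre-determined percentage of operating profit, pro rata to their
# holdings. All parameters below are illustrative assumptions.
def distribute_profit(holdings: dict, operating_profit: float,
                      payout_ratio: float = 0.10) -> dict:
    """Return the amount owed to each token holder.

    holdings: mapping of holder -> number of tokens held
    payout_ratio: share of operating profit promised to token holders
    """
    pool = operating_profit * payout_ratio
    total_tokens = sum(holdings.values())
    return {h: pool * n / total_tokens for h, n in holdings.items()}

payouts = distribute_profit({"alice": 600, "bob": 400},
                            operating_profit=1_000_000)
# pool of 100,000 split 60/40 between the two holders
```

Automating this rule in a smart contract is precisely what changes the cost/benefit equation: the distribution runs without an intermediary each period, in exchange for the holders' reduced involvement in governance.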
Though the ICO phenomenon is losing momentum, business support structures can learn a great deal from it, despite first perceiving it as a threat. It has revealed the complexities and the excesses of ‘traditional’ fundraising and is an opportunity for the innovation financing ecosystem to reinvent itself with new hybrid mechanisms.
And it is not the only driving force. Indeed, the digitalisation of operations is growing with more and more services being performed remotely: support, events, conferences, fundraising, etc. This revolution in our way of working is leading to a greater number of structures with no physical locations and off-site support programmes.
In addition to adapting to this innovative way of working, business support structures will have to adapt to new market requirements: improve their ability to operate in non-metropolitan areas by mobilising relevant investors outside major cities or develop their mastery of cutting-edge technologies such as blockchain, which makes it possible to digitalise complex operations without jeopardising data security.
Between the search for profitability, new start-up needs (mobility, flexibility, transparency), developments in utility (crowdfunding) and technology (blockchain), and changes in their ecosystem (digitalisation, globalisation, decentralisation), business support structures need to be as innovative as their innovative clients!
Notes
1 Accuracy study conducted between 31 March 2017 and 8 April 2019 on over 650 fundraising rounds undertaken by 380 incubated start-ups
2 Wynd operates mostly but not exclusively in foodtech
3 Accuracy analyses
4A blockchain is a distributed database that cannot be falsified. It can only be modified by increments. For further details, see the book: Blockchain – The key to unlocking the value chain, M. Della Chiesa, F. Hiault, C. Téqui (Eyrolles, 2019)
5 According to Accuracy analysis of CoinSchedule data
6 Les Echos, “ICO : les start-up tricolores boudent la France” (ICO: French start-ups shun France), 06/06/2018
7The exchanges are the largest players in this new ecosystem; by their size alone, they give investors confidence
8 Les Echos Investir, “Euronext prend une participation de 23,5% dans la fintech Tokeny Solutions” (Euronext takes 23.5% stake in Tokeny Solutions fintech), 01/07/2019
9 Tokenisation describes the act of creating a token on a blockchain, thereby materialising the ownership of an element external to the blockchain – for example, shares in financial assets
10 A smart contract is a transaction that is conditioned and programmed on a blockchain
Accuracy received the Great Place to Work certification for all of its participating offices.
What is the Great Place to Work® Certification?
Great Place to Work® Certification is the most definitive ‘Employer-of-Choice’ recognition that organisations aspire to achieve. The Certification is recognised all around the world by employees and employers alike and is considered the ‘Gold Standard’ in identifying and recognising Great Workplace Cultures.
Accuracy conducted sell-side financial due diligence for Tishman Speyer in the context of the sale of the Lumière building (Paris) to Primonial and Samsung.
Accuracy conducted sell-side financial due diligence for ADIA in the context of the sale of property assets to Amundi Immobilier and Crédit Agricole Centre Est.
Accuracy is announcing the promotion of three new partners. These promotions take place in the context of Accuracy’s continued growth since its creation 15 years ago. Today, Accuracy has locations in 13 countries and counts some 450 consultants and 50 partners.
Accuracy is sponsoring a series of events called “Arbitration Leading Minds”, organised by King’s College London and the Spanish Arbitration Club.
During these events, the attendees will get to know the leading arbitration practitioners better through one-on-one interviews.
The content of the interviews will combine personal and professional anecdotes with a substantive discussion of current legal issues in the practice of commercial and investment arbitration.
The first event will take place on February 5.
Accuracy conducted sell-side financial due diligence for the shareholders of Elivie, in the context of Ardian’s entry into the share capital of Elivie’s subsidiary Santé CIE.
Accuracy advised L. Possehl & Co. mbh in the context of the acquisition of the European Foundation Group B.V. (EFG). The Group specialises in providing foundation solutions based on screw piles for buildings, homes, industrial and infrastructural works, as well as soil drilling solutions in Germany.
The unbridled rhythm of innovation, the risk of disruption, the volatility of clients and the dearth of talents. These are all factors pushing large groups to innovate not only quickly, but also efficiently. This innovation imperative requires, in particular, large companies and start-ups to come together.
However, large companies wanting to support start-ups is not enough to make the collaboration work. Though innovation clearly constitutes a bridge linking the worlds of large groups and young businesses, its foundations can be weakened by strategic objectives and ways of working that are structurally different.
Our mapping of business support structures in France makes it possible to understand the primary trends in the French innovation ecosystem, as well as the performance levers able to be used to support start-ups:
A. Large French groups are actively engaged in supporting start-ups. This is the case for 90% of CAC 40 companies, which have created their own structure, participated in multi-company schemes or joined existing structures.
B. Initially centred on the Parisian region, French innovation is developing rapidly outside the capital, with larger cities welcoming more and more support structures.
C. The trend is towards structures specialised in the supporting entity’s sector(s) of activity: this is both a means of differentiation and a performance factor. It also enables the large group to integrate the value created more easily into its own activity.
D. Five axes of reflection make it possible to define the most appropriate format of support for the large group’s strategic objectives. Four relate to the solutions provided: hosting, human, technical and financial resources; the fifth relates to the level of maturity of the start-up.
E. A third-party expert is essential to implementing the support strategy, but also its governance. It is a question of facilitating cooperation and maximising value creation between the parties involved, which may have extremely different cultures!
INTRODUCTION
Whether it is a matter of the heart or the head, the union between large groups and young start-ups today is vital to securing growth levers in a world under constant transformation.
But how can large groups get their bearings among the myriad possible support formats? From incubators to accelerators, via co-working spaces, company nurseries, fablabs and corporate venture capital, what criteria should a group use to find the right support structure to meet its strategic objectives? Should it specialise in its own area of activity or remain a generalist, ready to capture value wherever it can be found?
Accuracy has undertaken a mapping of French support structures to provide the necessary keys to fully understanding the ecosystems in place. This vision will make it possible to judge which third-party experts can provide assistance in finding a truly productive and profitable approach.
1. INNOVATION IS UNDERGOING A REVOLUTION!
HOW TO TAKE ADVANTAGE OF IT TO CREATE VALUE?
A. From absorption to support
In an ever more uncertain environment, innovation is no longer restricted to internal R&D investments, patent portfolio management and the integration of outsourced technologies. It is now closely linked to risk-taking, through investments in audacious projects: to stay in the race, companies have to bet on disruptive entrepreneurs, young or otherwise.
The majority of large companies initially adopted a strategy of absorption. This was sometimes aggressive and destabilising for the entrepreneurs, and often inefficient in terms of innovation. However, previous failures and the appearance of new open innovation tools have encouraged new practices. Today, 90% of large groups favour start-up support structures, either by creating their own or by sharing or delegating their management.
Management of support structures in which CAC 40 companies invest
For example, the Vinci group created its own structure, “Léonard”, which among other things, stimulates intrapreneurship. It is all at once a start-up incubator, a co-working space and a meeting place for actors in municipal/regional transformation. As for Airbus, it signed a partnership with the incubator Centrale Audencia ENSA Nantes. In addition to the services provided by the incubator itself, those working there have access to a dedicated space (technical showroom and co-working space) able to host their intrapreneurial projects.
Other companies prefer to ask a third-party expert to set up their support system. For instance, AstraZeneca asked a pure player, Interfaces, to create and then manage its “Realize” programme, which aims to innovate in terms of a patient’s journey, data management and scientific innovation in the field of oncology.
B. A more and more balanced regional network
Paris and the French desert? Not so fast… It is not surprising that the capital is the nerve centre of French innovation: it boasts 26% of existing structures, including the top performing ones and those receiving the most media attention. However, the other regions of France are not to be outdone: major regional cities are also giving themselves the means to play a role in the race for innovation.
Indeed, the French ecosystem has a network of support structures that is becoming more and more complete. More than 700 municipalities have at least one support structure, and all regions are seeing their number of structures increase.
Breakdown of support structures in France
Our quantified analysis makes it possible to take stock of the situation and to predict the future dynamics of each region. Ile-de-France shows an innovation support ecosystem that is already relatively mature, whilst the other regions, even those already well developed such as around Bordeaux and Toulouse, continue to show strong growth prospects.
In short, France’s innovation ecosystem is rather logically based on the economic dynamism of the different regions and seems to form a Sun Belt à la française, which starts in Rennes and descends all the way down to the Nice region, passing by Bordeaux, Toulouse and Montpellier.
C. Innovation ecosystems more and more specialised by sector
In this regionalisation of innovation, certain areas have chosen to rely on their economic history to create specialised channels by sector. But is it better to go generalist or specialist? The majority of large groups have had to make this decision, with each adopting the strategy that seems most relevant to its strategic and economic imperatives.
However, the fact is that specialisation is gaining ground. Themed platforms now make up a significant proportion of the support structures in France. This may be because, on the one hand, the added value of the support may be significantly larger, and on the other, companies generally seek benefits in their areas of activity. Moreover, specialisation is a differentiation factor in the face of increasing competition following the rise in the number of support structures in recent years.
The development of the banking sector illustrates this transformation perfectly. Since 2014, Crédit Agricole’s “Village by CA” has spread throughout France, in line with the presence of its regional head offices, and regardless of the sector of application. Its aim is to assist entrepreneurs by providing coaching, a potential network of business partners and mentoring by bank employees, in the hope that they then become suppliers or clients of the bank. All other large banks have followed suit, creating their support structures, but sometimes limiting them to their core business lines. For example, “Plateforme 58” by La Banque Postale, is active in banking and insurance, as well as in financial technologies, health, education and services. As for BNP Paribas with its acceleration programme “Bivwak!” (in addition to “WAI”), HSBC with “Lab innovation” and Société Générale with “Swave”, they concentrate on innovations that are applicable to the bank’s business lines, supporting fintechs and insurtechs.
In this specialisation trend, certain sectors seem more attractive than others. The graph below clearly shows the areas that are over-represented in the innovation ecosystem when looking at their market size. In all probability, the greentech, fintech, biotech, and agritech sectors, but also media and communication, will drive innovation for the next few years. Hence why it is important for companies to position themselves now to secure the creation of value tomorrow!
Investment by specialisation
2. HOW TO CHOOSE THE RIGHT FORMAT FOR START-UP SUPPORT AND MAXIMISE RETURN ON INVESTMENT?
A. What type of structure for what strategic objectives?
Even if 90% of CAC 40 companies have chosen to invest in at least one start-up support structure, the format used is not always appropriate to achieve their strategic objectives.
There are multiple types of support structures. The services offered vary, ranging from simple hosting services to the provision of machine tools for prototypes, access to mentoring or bespoke acceleration programmes, the organisation of networking events and also assistance with financing. So how should a large group choose the most appropriate format for its strategic objectives?
Of course, it should start by clarifying these objectives, which underpin its investment logic. Is its ambition to obtain a quick return on investment? To participate in the development of a region to make it more dynamic? To monitor technology closely in order to integrate any developments by the start-up as quickly as possible? To face human resources challenges through intrapreneurship, the recruitment of new talents, the employer brand or the sharing of new ways of working?
Defining these objectives makes it possible in turn to define the type of start-up to target (in particular, in terms of maturity) as well as its associated needs (hosting, technical means, human resources and financial means). The relative weighting of these five elements therefore determines the most relevant support structure, in light of both the strategic priorities of the large group and the actual needs of the start-up.
Indeed, the mapping below presents the different support ecosystems that exist in France, based on the relative weight of each of these five criteria.
Mapping of main support structures
By way of example, incubators essentially serve communication objectives, HR issues and technology capture; their offer mostly comprises hosting, coaching and mentoring, and is aimed at younger start-ups. Fablabs, by contrast, are geared less towards communication and more towards the development of talent and regions. For that reason, they tend to deal with more mature projects (often at the prototyping stage), for which a large group may supply significant technical means.
B. The thorny question of governance: a trusted third party to make alliances last
Once the structure has been identified and fully considered, the difference – as usual – resides in execution.
First, to attract the most promising start-ups, groups must ensure that they bring a differentiating factor to the table. It is for this reason that Unibail-Rodamco-Westfield offers the opportunity to test innovations and business models in its shopping centres, whilst the highly active communication surrounding EDF Pulse provides a strong level of exposure.
Second, supporting start-ups is an investment project just like any other, and in this respect, it requires the rigorous monitoring of KPIs defined in advance. This performance steering, whether it be through strategic partnerships, equity investments or support programmes, raises the tricky question of how much independence is necessary to innovate. How can a large group implement a governance structure making it possible to provide support to the start-up but without suffocating it? Adapting internal processes so as not to stifle the start-up’s development with too much rigidity, involving top management to strengthen the legitimacy of the programme internally, communicating regularly but not intrusively… There are many different success factors, the implementation of which may require the presence of a trusted third party.
This third party can contribute to building a bespoke support programme and supervising it once in place, particularly in the case of multi-company structures such as “Plant 4.0”, which groups together Total, Vinci Energies, Solvay, Eiffage, Orano and Air Liquide.
The trusted third party must understand the advantages and disadvantages of each type of structure to create a bespoke programme that responds effectively to the large group’s strategic innovation challenges. But above all, it must be a bridge between the large group and the start-up: indeed, these actors each have differing strategic objectives, which only converge when it comes to innovation. Accuracy can be the trusted third party, acting to orchestrate, coordinate and optimise cooperation in this “shared space”.
1 Relationship between the number of existing support structures in the region and the entirety o