Generative AI to unleash developers’ productivity

In mid-June, I wrote about “Leveraging Artificial Intelligence in Software Development.” McKinsey & Company has just published a study that “shows that software developers can complete coding tasks up to twice as fast with generative AI.” Not surprisingly, generative AI can be used for code generation, code refactoring, and code documentation, speeding up these activities by 20 to 50 percent.

The purpose of Generative AI is to assist developers rather than replace them. It is important for developers to have solid coding skills and to dedicate time to learning how to use Generative AI effectively. Generative AI won’t replace developers when it comes to integrating organizational context (e.g., integration with other processes and applications), examining code for bugs and errors, and navigating tricky coding requirements.

According to McKinsey’s research, generative AI shone and enabled tremendous productivity gains in four key areas: expediting manual and repetitive work, jump-starting the first draft of new code, accelerating updates to existing code, and increasing developers’ ability to tackle new challenges.

The transition to coding with Generative AI will take time. Technology leaders must train and upskill their development teams, start experimenting early, and deploy risk control measures. Risk controls must cover many topics, including but not limited to security, data privacy, legal and regulatory requirements, and AI behavior.

Improving developer productivity through generative AI is a journey that will take some time. It is crucial for companies, particularly regulated ones such as asset and wealth managers, to begin experimenting with it so that they can better understand regulatory and security constraints and how best to address them.

Enterprise Data Paradigm Shift for Financial Institutions

This article has been co-written with Rémi Sabonnadiere (Generative AI Strategist – CEO @ Effixis) and Arash Sorouchyari (Entrepreneur, Speaker, Strategic Advisor for Fintechs and Banks).

This is the next episode of Time for an Enterprise Data Paradigm Shift.


The banking industry relies heavily on data-driven insights to make informed decisions, but gathering and consolidating data can be a slow and difficult process, especially for large financial institutions. Consider The Financial Company, a fictitious global wealth manager with billions of assets under management. The firm has grown quickly through multiple acquisitions, resulting in a complex IT landscape with various Investment Books of Records (IBORs) and data repositories.

At The Financial Company, it can be a challenge for business users to find out how much the company is exposed to a specific country or sector. To get this information, they have to ask the IT department to build custom queries across several databases and then wait one to two business days for the answer. The process is slow and inefficient.

One commonly used approach to solving that challenge involves business intelligence and data visualization tools like Microsoft Power BI: the IT department builds a solution tailored to the specific needs of the business user. However, this approach is inefficient, as it is purely reactive and hard to scale. Each new query or use case requires a new customized solution, which often leads to copying more data into an existing data warehouse or creating a new one. BI developers must identify the correct data in various databases, gain access to it, create extraction procedures, and adjust data warehouse structures to receive the data.

Imagine if business users could get real-time answers without depending on the IT department. This is where the paradigm shift occurs – using Generative AI to change the data retrieval process from a query-based model to a prompt-based one.

Moving From Query to Prompt

Generative AI brings an innovative shift by placing a Large Language Model (LLM) powered agent on top of multiple databases, eliminating the need for never-ending and costly database consolidation. This approach requires two key elements:

  • Database Crawlers: Gathering data from numerous databases, files, and services with different API technologies is a significant challenge. Database Crawlers help by connecting to multiple databases, reading their schemas, and understanding them. These crawlers can function as domain agents that possess knowledge of a particular domain’s data and context. They are aware of the databases and structures within their domain, eliminating the need for model discovery with each request.
  • Generative Prompt: The generative prompt helps interpret user requests, generate query codes, and gather data from multiple databases. The consolidated data is then presented to the user. The prompt can seek user assistance if there is any uncertainty in selecting the appropriate data sources and fields.

By leveraging the exceptional text-to-code abilities of Large Language Models, together with their capacity to understand both human questions and data dictionaries, this approach creates an intelligent layer capable of answering many requests in a reliable, explainable, and intuitive way. The benefits for an organization are numerous.
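To make this concrete, here is a minimal sketch of the pattern, assuming a hypothetical `llm_complete()` helper that wraps whatever LLM endpoint the firm has approved, and two small illustrative in-memory databases standing in for The Financial Company’s dispersed IBORs. The crawler step reads each schema; the prompt step asks the model for one query per database, runs the queries, and consolidates the results.

```python
import sqlite3

def build_demo_databases():
    """Illustrative only: two in-memory databases standing in for dispersed IBORs."""
    equities = sqlite3.connect(":memory:")
    equities.execute("CREATE TABLE positions (isin TEXT, country TEXT, sector TEXT, market_value REAL)")
    equities.executemany("INSERT INTO positions VALUES (?, ?, ?, ?)", [
        ("US0378331005", "US", "Technology", 1_200_000.0),
        ("NO0010096985", "NO", "Energy", 450_000.0),
    ])
    bonds = sqlite3.connect(":memory:")
    bonds.execute("CREATE TABLE holdings (isin TEXT, issuer_country TEXT, industry TEXT, notional REAL)")
    bonds.executemany("INSERT INTO holdings VALUES (?, ?, ?, ?)", [
        ("XS1234567890", "NO", "Energy", 800_000.0),
    ])
    return {"equities": equities, "bonds": bonds}

def crawl_schemas(databases):
    """Crawler step: read each database's schema so the model knows what exists."""
    schemas = {}
    for name, conn in databases.items():
        rows = conn.execute("SELECT name, sql FROM sqlite_master WHERE type = 'table'").fetchall()
        schemas[name] = "\n".join(create_sql for _, create_sql in rows)
    return schemas

def answer_question(question, databases, llm_complete):
    """Prompt step: generate one query per database, execute it, consolidate the results."""
    schemas = crawl_schemas(databases)
    results = {}
    for name, conn in databases.items():
        prompt = (
            f"Schema of database '{name}':\n{schemas[name]}\n\n"
            f"Write a single SQLite query answering: {question}\nReturn only SQL."
        )
        sql = llm_complete(prompt)  # hypothetical wrapper around the chosen LLM endpoint
        results[name] = conn.execute(sql).fetchall()
    return results

# Usage with a canned "LLM" so the sketch runs without any external service.
databases = build_demo_databases()
canned = {
    "equities": "SELECT SUM(market_value) FROM positions WHERE country = 'NO'",
    "bonds": "SELECT SUM(notional) FROM holdings WHERE issuer_country = 'NO'",
}
fake_llm = lambda prompt: canned["equities"] if "positions" in prompt else canned["bonds"]
print(answer_question("What is our total exposure to Norway?", databases, fake_llm))
```

In a real deployment, the canned responses would be replaced by an actual model call, and the domain agents would enforce access rights and filtering before returning any rows.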

Key Benefits

Instant Access and Enhanced Decision Making

Generative AI offers banks immediate and reliable access to data, thus empowering real-time decision-making. The ability to query data easily and access it in real-time enables banks to rapidly recognize potential risks and opportunities and make informed strategic decisions.

Improved Data Completeness and Accuracy

By accessing data from various sources and utilizing intelligent agents, Generative AI helps ensure that the data used is complete and accurate. This significantly reduces errors and improves overall data quality, ensuring that decision-making processes are grounded on current and comprehensive information.

Bridging the Skills Gap

Generative AI eliminates the need for advanced technical skills, as business users can interact with the system using natural language queries. This bridges the skills gap, allowing users to derive the necessary insights independently and fostering a self-sufficient environment.

Scalability and Flexibility

Generative AI systems are inherently scalable and flexible. They can adapt to changing business needs and accommodate new use cases effortlessly. Instead of creating individual solutions for each query, the AI system can dynamically handle various requests irrespective of the underlying database management systems and data structures. This adaptability allows banks to remain agile and swiftly respond to new data demands.

Cost Reduction

Generative AI removes the necessity for expensive data migration projects by allowing data retrieval from current, dispersed sources. This leads to significant reductions in both time and expenses associated with data consolidation.

Addressing Data Challenges

Data Gathering and Data Quality

Generative AI also utilizes data healers to enhance data quality. However, accessing these data sources with crawlers entails challenges such as access rights, filtering data based on user rights, identifying inconsistencies, merging data, and avoiding overloading transactional databases with queries.

By adopting a domain-based agent approach, each domain agent ensures that performance, access rights, and other issues are tackled. The agents are developed by the respective domains and are equipped to provide answers related to their data model across all databases. Moreover, AI doesn’t bypass the need for IT expertise; it enables IT teams to create intelligent agents that can autonomously answer future queries.

Additionally, AI can search online sources for relevant data to deal with incomplete databases. For example, by analyzing articles, the AI system can identify companies associated with the oil and gas sector and create an extra column named “Industry_AI_generated”, which can be automatically populated with pertinent values.
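As a rough illustration of that enrichment step (the table, column names, and the `classify` callback are all hypothetical; in practice the classification would come from an LLM or another AI service reading news articles and online sources):

```python
import sqlite3

def enrich_industry(conn, classify):
    """Populate an 'Industry_AI_generated' column for holdings missing an industry."""
    conn.execute("ALTER TABLE holdings ADD COLUMN Industry_AI_generated TEXT")
    missing = conn.execute("SELECT company FROM holdings WHERE industry IS NULL").fetchall()
    for (company,) in missing:
        conn.execute(
            "UPDATE holdings SET Industry_AI_generated = ? WHERE company = ?",
            (classify(company), company),  # classify() stands in for the AI step
        )

# Usage with an illustrative table and a canned classifier.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE holdings (company TEXT, industry TEXT)")
conn.executemany("INSERT INTO holdings VALUES (?, ?)",
                 [("Equinor ASA", None), ("Nestlé SA", "Consumer Staples")])
enrich_industry(conn, classify=lambda company: "Oil & Gas" if "Equinor" in company else "Unknown")
print(conn.execute("SELECT company, industry, Industry_AI_generated FROM holdings").fetchall())
```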

Minimizing System Overload

In order to avoid system overload, domain agents should use tactics like read-only database instances, setting up local data storage, or utilizing performance-optimized services, particularly if dealing with transactional databases. It is the responsibility of each domain to handle performance concerns effectively.
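A very small sketch of that routing idea, assuming hypothetical primary and replica connection URLs: agent-generated analytical queries go to a read-only replica, so the transactional primary never takes the extra load.

```python
PRIMARY_URL = "postgresql://primary.internal/ibor"   # transactional workload only (hypothetical URL)
REPLICA_URL = "postgresql://replica.internal/ibor"   # read-only analytics copy (hypothetical URL)

def route(sql: str) -> str:
    """Decide which instance an agent-generated statement should run against."""
    is_read_only = sql.lstrip().lower().startswith("select")
    return REPLICA_URL if is_read_only else PRIMARY_URL

assert route("SELECT SUM(market_value) FROM positions") == REPLICA_URL
assert route("UPDATE positions SET market_value = 0") == PRIMARY_URL
```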

Way Forward

Banks can benefit from using Generative AI, specifically LLM-powered agents, to retrieve data from multiple databases. Although AI isn’t a complete fix, having agents that are knowledgeable about their specific domain can greatly help alleviate the issues. These agents act as important components in the data retrieval process, as they’re familiar with the context and data of their domain.

It is important to understand that this technology does not replace the need for IT expertise. Rather, it repositions IT to create intelligent agents that can autonomously answer future queries. This approach aligns with the data-mesh strategy and is a transitional phase that helps IT departments focus on long-term strategies for data management and legacy system transformation.

Banks should begin testing this technology to discover its potential as a game changer. By doing so, they can become data-driven companies faster than they anticipated. If you are interested in learning more about this approach or running a proof of concept, please contact info@effixis.com.

We will soon publish an exciting new episode, where we will introduce a cutting-edge prototype powered by Generative AI. Stay tuned!

Leveraging Artificial Intelligence in Software Development

Artificial Intelligence (AI) offers diverse applications in software development that will drastically change how firms develop software. It can:

  • Support developers by accelerating coding tasks, leading to faster and higher-quality code.
  • Document existing code that has no documentation (see the sketch after this list).
  • Help developers take ownership of code they did not write.
  • Debug code.
  • Accelerate or even automate the migration of legacy stacks to more modern technologies.
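As an example of the documentation point above, here is a minimal sketch, assuming the OpenAI Python SDK (v1+) with an `OPENAI_API_KEY` configured; the model name is only a placeholder, and any code-capable model behind Copilot-style tooling would play the same role.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def document_function(source_code: str) -> str:
    """Ask the model for a concise explanation / docstring of a legacy code snippet."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; use whatever your firm has approved
        messages=[
            {"role": "system", "content": "You write concise Python docstrings."},
            {"role": "user", "content": f"Document this function:\n\n{source_code}"},
        ],
    )
    return response.choices[0].message.content

legacy_snippet = "def f(x, r): return [p for p in x if p.country == r]"
print(document_function(legacy_snippet))
```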

Within the next 6 to 18 months, most software development tools will integrate some artificial intelligence to support developers. On the one hand, there are traditional players like Microsoft with its Copilot. But competition is building up with solutions from Tabnine, Codeium, and CodeComplete, to name a few. And you can expect all data science products, like Databricks, Hex, and Dataiku, to integrate some “copilot” to support users and developers.

There are big questions on the intellectual property of the code generated by artificial intelligence, and the code firms share with these solutions. Everybody knows the horror story of Samsung using ChatGPT to debug some proprietary and confidential code. ChatGPT eagerly consumed the data, using it as training material for future public responses.

The rise of artificial intelligence will not render developers obsolete. Instead, it offers a unique chance to establish a harmonious collaboration between humans and computers. Developers should see Artificial Intelligence as a new colleague with superpowers. By delegating repetitive and mundane tasks to AI, developers can devote more time to creative problem-solving and embark on a journey of enhanced productivity and innovation.

Another interesting topic, not linked to artificial intelligence, is how financial institutions recruit and deploy developers. The traditional way has been to hire them and locate them in internal facilities. But the war for talent has made it very difficult to hire outstanding developers, not to mention that they often don’t want to be employees or to work in a specific location. They want more freedom. An ecosystem of secure software development solutions is becoming available in the public cloud and from specialized providers like Strong Network (www.strong.network). It is time for financial institutions to start looking into this.

Time for an Enterprise data paradigm shift

Looking back at the significant data trends over the last 20 years, we have moved from relational databases to data warehouses, data lakes, and now data mesh. We can insert a few more concepts in between, like non-relational databases (NoSQL), data virtualization, the move from on-premises to the cloud, and more. But have we succeeded and made significant progress in managing and mastering the Enterprise data? The results are somewhat mixed.

On paper, data mesh is an attractive idea and makes sense, with the concepts of domain owners, data-as-a-product, self-service, and federated data governance. But implementing it will be challenging and take a long time. Not to mention that getting there one hundred percent is probably an illusion.

Many data simplification and rationalization projects have delivered too little, if they have not failed outright. There are multiple reasons for this. First, let’s recognize that it’s a complicated problem to solve. Second, there is always something more important to do in terms of new critical data requirements. Third, data requirements keep evolving, and there is no perfect “master” data model that can handle everything. I’ll stop here, as my objective is not to provide an exhaustive list.

Maybe it is time to consider a different data paradigm and approach the problem from a different angle. While we should continue to simplify the data landscape and put better data governance in place – I would definitely push the concept of data mesh – we should also recognize that this will be a very long journey and that some data “mess” will remain for a very long time, if not forever. So why not acknowledge and accept it and link all this data “mess” together via an abstraction layer? And I am NOT talking about data virtualization like Denodo and others would think about it.

This is where the latest advancements in artificial intelligence will play a crucial role. What I am proposing here would not have been possible 5-10 years ago. We need two things: 1) an intelligent engine to connect disparate databases and 2) some generative AI to help the user, whether a human or a machine through APIs, to get the data she needs from these disparate databases. These tools exist today and could be deployed across the Enterprise. I am currently discussing the idea with a Swiss startup, and some proof of concept could work within a few months. A full deployment could also be very fast.

This could revolutionize our thinking about Enterprise data. Stay tuned. I will continue to discuss this topic over the coming weeks and months.

#digitaltransformation #datamesh

Back to building monolith applications?

Most experienced engineers would likely recommend microservices, APIs, and serverless architectures in the current technology landscape, and few would dare to talk about building a monolithic solution. But this is what one Amazon Prime Video team did because of excessive infrastructure costs and scaling bottlenecks. By ditching serverless, microservices, and AWS Lambda, they cut their infrastructure costs by 90% and solved their performance issues. Read the full story here.

Implementing the latest infrastructure and development concepts for the sake of it does not necessarily bring the best solutions. It’s like pushing database normalization to the extreme, to the detriment of performance. Denormalization is often necessary.

Every case is different when building a new IT solution and comes with its own requirements. Different IT architectures have various pros and cons, and no solution will be perfect. You must challenge your team to think outside the box, assessing “modern” ways of doing things while not excluding traditional ones. You must also take the existing IT landscape and technical debt into account, because it’s not like you can erase everything and start from scratch. If necessary, build a minimum viable product to prove that your proposed architecture is scalable and delivers the required performance.

Paradigm shift to replace the legacy technology stack of banks [and wealth and asset managers]

The McKinsey & Company article “Banks’ core technology conundrum reaches an inflection point” presents an insightful perspective on the core technology challenge that banks are currently facing, which is now reaching a critical point. While this issue has been discussed for a long time, two factors are making the situation more pressing than ever. Firstly, banks will soon face a talent shortage in their legacy technologies. At the same time, they will have to fight for talent in new technologies. Both talent shortages will put significant pressure on their ability to maintain and evolve their systems. Secondly, legacy technologies are consuming a growing share of banks’ budgets, leaving them with limited resources to pursue strategic initiatives that can drive innovation and transformation.

Another interesting element discussed in this article is how Thought Machine is thinking about solving part of the problem by running products as code and making them independent of the platform, which, for incumbent banks, can be composed of tens if not hundreds of different systems: “We have a system of smart contracts that run on the platform, but they’re separate from it,” says Paul Taylor, founder and CEO of Thought Machine. Brian Ledbetter, a senior partner at McKinsey & Company, also brings up the concept of putting risk controls in code rather than in processes. After infrastructure as code, which we have been discussing for quite some time, we are adding controls as code and products as code: significant paradigm shifts that are complicated for incumbent banks still dealing with mainframes and systems that are 20+ years old.
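To give a flavour of what “controls as code” can look like, here is a purely illustrative sketch (the names, thresholds, and structure are invented and are not Thought Machine’s smart-contract model): a pre-trade limit expressed as a versioned, testable function instead of a manual process step.

```python
from dataclasses import dataclass

@dataclass
class Order:
    isin: str
    notional: float
    counterparty: str

SINGLE_ORDER_LIMIT = 5_000_000.0            # invented threshold, reviewed like any code change
BLOCKED_COUNTERPARTIES = {"SANCTIONED_CP"}  # invented list

def pre_trade_control(order: Order) -> list[str]:
    """Return the list of control breaches for an order; an empty list means the order passes."""
    breaches = []
    if order.notional > SINGLE_ORDER_LIMIT:
        breaches.append("single-order notional limit exceeded")
    if order.counterparty in BLOCKED_COUNTERPARTIES:
        breaches.append("counterparty is blocked")
    return breaches

print(pre_trade_control(Order("US0378331005", 7_500_000.0, "BANK_A")))
# -> ['single-order notional limit exceeded']
```

Because the control lives in code, it can be unit-tested, versioned, and deployed consistently across the platform, which is exactly the shift the article describes.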

The challenge of legacy IT stacks and technical debt for incumbent banks has been discussed for decades. Incumbent banks must not look at this as a systems replacement, but as an enabler and a necessity for their future. To be successful, incumbent banks must educate their business on technology, have a technology talent strategy, and bring people to the center of their digital transformation.

Interesting video from the CEO of Thought Machine.

Risk Management

Once in a while, I will discuss some topics that are not totally related to the digital transformation of wealth and asset managers. This is one, even if I could argue that a digital transformation cannot be run without taking and managing some risks.

If we want to talk about risk, not many activities are more dangerous and relevant than mountain climbing. Jimmy Chin, an American professional mountain athlete, photographer, film director, and author, discusses risk management in the context of climbing at a Goldman Sachs talk. 

The first element he brings in is embracing failure, especially since there are a lot of failures in climbing. Their first attempt to climb Meru Peak (a mountain in the Garhwal Himalayas) failed. On their way down, they were already making decisions about things they had to change for their next expedition, like being lighter and taking warmer sleeping bags.

A second element is embracing the process, managing the variables you can control, and identifying those you cannot control. By embracing the process, you will focus on everything you need to get together to succeed, not only on the ultimate goal (in Jimmy’s case, reaching the summit). That’s how you get there.

The third element is fear, which can be healthy, as it helps sharpen senses and motivates. Fear is not helpful when it becomes paralyzing or turns into panic.

Not surprisingly, a key component in risk management is anticipating all the potential problems that can emerge and having pre-defined solutions. When a risk materializes, taking stock of the situation and identifying the perceived and actual risks are essential. And really focusing on the actual risks.

Jimmy also brings up the notion of trust and of understanding how people function in different situations.

Reference:
Listen to the talk. There is much more there. Goldman Sachs, Jimmy Chin talk: https://www.goldmansachs.com/insights/talks-at-gs/jimmy-chin.html

2023 Gartner Emerging Technologies and Trends Impact Radar

Gartner has released its 2023 Gartner Emerging Technologies and Trends Impact Radar. Let me try to give it a read through the eyes of a wealth or asset manager.

Artificial Intelligence is all over the radar, with Foundation Models, Self-Supervised Learning, Generative AI, and more. Wealth and asset managers must seriously look into Artificial Intelligence and understand how it can be leveraged across their value chain, from investments to operations, risk management, and compliance. Part of the solution will come from their solution providers, like Bloomberg, BlackRock Aladdin, or State Street Alpha, to name a few. But wealth and asset managers cannot rely only on their providers. Instead, they must acquire the necessary skills and talent and experiment with artificial intelligence technologies. The war for talent will make this complicated.

Blockchain is also quite present, with Web3 and Tokenization both on the 3-to-6-year horizon. Most wealth and asset managers are already testing tokenization in one way or another, and they should continue. Tokenization will bring many benefits to the industry: speeding up transactions, eliminating some intermediaries and therefore reducing costs, making some asset classes available to smaller investors, improving the liquidity of some assets, and more. It will also require some industry alignment and standards.

No surprise, Digital Twins are here too. Gartner is probably thinking more about Digital Twins in the context of manufacturing and industrial activities. But as discussed in this blog, the potential for digital twins in the financial industry is real and massive.

Then, there are hardware and infrastructure with Neuromorphic Computing, 6G, and Hyperscale Edge Computing. If wealth and asset managers continue to move to the cloud, they’ll be able to leverage these latest hardware technologies as they become available in the cloud.

Using digital twins in wealth and asset management

I have always been fascinated by digital twins and the potential they offer. There are many examples of companies using them. BMW has partnered with NVIDIA and uses real-time digital twin factories to optimize its production and conduct predictive maintenance. Emirates Team New Zealand uses digital twins to design and test its boats. SpaceX uses a digital twin of the Dragon capsule to monitor and adjust trajectories, loads, and propulsion systems. McKinsey says that companies can achieve ~50% faster time-to-market, ~25% improvement in product quality, and ~10% revenue uplift with digital twins.

Let’s start with some definitions. What is a digital twin, and how does it differ from simulations and standard CAD (Computer-Aided Design)? Simulations are usually limited to one process (i.e., a narrow scope) and do not leverage real-time data. In contrast, digital twins are a virtual representation of a real, complete system, fed with real-time data and lasting the system’s entire lifecycle. They allow rapid iterations and optimization of the system. The next big thing is linking digital twins to augmented and virtual reality and interconnecting them, ultimately creating the enterprise metaverse.

How about we use digital twins in wealth and asset management? I am hesitant to say that we have already been using them for a long time to model portfolios, test investment strategies, and assess the impact of certain events, to name a few examples. A “purist” might say these are more simulations than digital twins. And this is correct in many cases. The backtest of a portfolio is a simulation. But when an asset manager builds models to optimize portfolios daily, using near-real-time data, it’s getting very close to being a digital twin.
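A stripped-down sketch of what such a portfolio “twin” could look like (the class and field names are illustrative, not a product): a virtual copy of the book that is continuously updated from a price feed and can be queried or stress-tested at any moment.

```python
from collections import defaultdict

class PortfolioTwin:
    """A minimal virtual copy of a portfolio, kept in sync with near-real-time prices."""

    def __init__(self, positions):
        self.positions = positions  # {isin: {"quantity": float, "sector": str}}
        self.prices = {}

    def on_price(self, isin, price):
        """Feed handler: update the twin whenever a new market price arrives."""
        self.prices[isin] = price

    def exposure_by_sector(self):
        exposure = defaultdict(float)
        for isin, pos in self.positions.items():
            exposure[pos["sector"]] += pos["quantity"] * self.prices.get(isin, 0.0)
        return dict(exposure)

    def shock(self, sector, pct):
        """What-if: apply a percentage shock to one sector and return the P&L impact."""
        return sum(
            pos["quantity"] * self.prices.get(isin, 0.0) * pct
            for isin, pos in self.positions.items()
            if pos["sector"] == sector
        )

twin = PortfolioTwin({"US0378331005": {"quantity": 1_000, "sector": "Technology"}})
twin.on_price("US0378331005", 190.0)
print(twin.exposure_by_sector())        # {'Technology': 190000.0}
print(twin.shock("Technology", -0.10))  # -19000.0
```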

Traditional wealth and asset managers have yet to fully leverage the potential of digital twins: their use of data is limited (less so for quantitative asset managers), they could use more near-real-time and alternative data, and they could apply data more broadly across their entire value chain, from market research to portfolio construction, product development, marketing, and sales and distribution.

Many solutions are available to wealth and asset managers to use and leverage more data. But it requires more than tools. It requires technical skills and talent. Investment teams must have developers within their teams to use advanced market research solutions. Product Development teams must learn data science (e.g., Python) to get the best out of markets’, customers’, and competitors’ data. And the IT team must support these platforms.

Another challenge is where to start and how to build digital twins. A common mistake is to try to build a full-fledged digital twin at once. It’s better to start small and evolve the first version. A good suggestion is to run hackathons to quickly develop prototypes and test the initial concepts. And the beauty of a hackathon is that you get a multidisciplinary team, with portfolio managers, product development people, and engineers working together.

To be successful, wealth and asset managers must make this a firm-wide objective, driven from the top. They must invest in talent and team upskilling and ensure the right innovation culture is in place.

Let’s look at the digital transformation of other industries – The Washington Post

Looking at other industries to think about innovation and how to leverage technology in wealth management is always interesting. In that context, the digital transformation journey of the Washington Post, led by Shailesh Prakash, is very insightful.

They quickly recognized that The Post needed to achieve excellence in both journalism and technology. It was a radical transformation, moving the IT department from a mindset of babysitting IT systems, systems in which the newsroom staff had very little confidence, to a product development mindset of building and inventing digital products. Part of the journey was adopting agile and colocating the engineers with their partners from the newsroom. They decided to build rather than buy, set up a culture of fast experimentation and innovation, and developed an obsession with products. Attracting and retaining the best engineers was key in that journey.

Through their excellence in technology, they built a set of tools they could sell to other publishers: Arc Publishing (Arc XP) was born, generating tens of millions of dollars of revenues for The Post.

Deep dive into the digital transformation of the Washington Post with the University of Virginia case study. There are also plenty of resources on the web; just search for Shailesh Prakash (who is now at Google). If you like this digital journey, I also recommend reading the Goldman Sachs’ Digital Journey case study from Harvard Business School. A great read.