Conversation with Ariadna Font (Part 2)

Ari (Ariadna) Font is Co-Founder and CEO of Alinia, a startup leveraging generative AI to improve business-critical operations in industries that rely on policy alignment and trust. Ariadna was previously Head of ML Platform at Twitter, where she established responsible AI at the company level. Before that, she was Director of Development at IBM Watson. She holds a PhD in NLP from Carnegie Mellon University.


Why do you think Responsible AI / ML is relevant in a company?

Today, Responsible AI (RAI) is imperative for any company embarking on its (gen)AI journey. Understanding the limitations and unintended consequences early and proactively mitigating risks is critical for any company that cares about its reputation.

This is already happening; companies are spending more time evaluating gen AI systems than creating them in the first place. And it makes sense: LLMs are very powerful, but they are also non-deterministic, so their output is often not predictable. Companies need to take additional steps to evaluate genAI properly and to steer and control it.

One year ago you decided to create alinia.ai. What is the purpose of the company?

Yes! When ChatGPT came out, I realized this changed everything. I knew that this new era of AI needed leaders to shape the future that we all want for our children and generations to come. So I decided to put my technical expertise and professional experience to good use.

In a nutshell, Alinia AI is committed to helping companies safely innovate and navigate this new space of GenAI. Our Alignment Platform ensures a safe and controlled deployment of gen AI, aligned with each company's policies, business context, and specific preferences and constraints.

What services do you offer and who is the potential customer? Why should companies use services like the ones you are offering?

We offer different modules as part of our AI Alignment Platform. 

The first is an in-context business evaluation module. This allows companies to evaluate specific use cases and applications. We see this as the first step towards knowing where they are and what they need to fix to prepare for deployment. You cannot control what you cannot measure!

The second module allows customers to control and customize their genAI applications to ensure they represent the company’s values and policies.

Last but not least, the third module allows companies to deeply align the underlying models and continue to steer and fine-tune them once they are in production.

What is the tech stack you are leveraging on to develop and deploy your solutions?

We are leveraging several open source technologies and are also building our own pieces where needed. 

We are doing our own research and benchmarking of several LLMs for different domains and tasks as well as key metrics, so that we are in a position to build our expertise into the platform and scale it out to all of our customers.

What do you think is the main challenge for solutions such as alinia.ai to be deployed? 

The gen AI space (and even enterprise AI more broadly!) is so new that companies are still scratching their heads about what tools and platforms they need and how to leverage them efficiently. So, for a solution like Alinia to take off, we have to invest time in educating companies, helping them understand that we are enablers for deployment and that gen AI is not a magical, harmless tool that will do the work for them. Even though this might seem obvious to us, it is not that obvious to legacy enterprises; they need guidance, and the gen AI enterprise market is still at an embryonic stage.

Many companies are exploring Generative AI through their default cloud provider (AWS, Google, Azure). What’s your view on how to embrace Responsible AI when you are leveraging third parties?

Being very pragmatic, I would say start with something small but meaningful that has the potential to impact your ROI or another business KPI. Measure your baseline (how you are doing now) with meaningful metrics that reflect how well your new system is performing, and then continue to measure your progress along this journey, because this is a journey. You will want to keep monitoring your AI systems once they are in production to ensure there is no model drift and that they are not accidentally creating harmful or inappropriate content.
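To make the "measure a baseline, then keep monitoring" advice a bit more concrete, here is a minimal sketch in Python. Everything in it is hypothetical: generate_answer stands in for whatever genAI system you deploy, check_policy for whatever compliance scorer you choose, and the 0.5 and 0.05 thresholds are arbitrary placeholders, not Alinia's product or any vendor's API.

```python
# A minimal sketch of the baseline-and-monitor loop described above.
# All names and thresholds are hypothetical placeholders.
import statistics
from typing import Callable

def evaluate_responses(
    prompts: list[str],
    generate_answer: Callable[[str], str],
    check_policy: Callable[[str], float],
) -> dict:
    """Score a batch of prompts and summarize how well outputs respect a policy.

    check_policy returns a score in [0, 1]: 1.0 = fully compliant, 0.0 = clear violation.
    """
    scores = [check_policy(generate_answer(p)) for p in prompts]
    return {
        "mean_compliance": statistics.mean(scores),
        "violations": sum(1 for s in scores if s < 0.5),
        "n": len(scores),
    }

def drifted(baseline: dict, current: dict, tolerance: float = 0.05) -> bool:
    """Flag a drop in compliance relative to the pre-deployment baseline."""
    return baseline["mean_compliance"] - current["mean_compliance"] > tolerance
```

The idea is simply to run the same evaluation on a fixed set of prompts before deployment and on samples of fresh production traffic afterwards, then compare the two to catch drift or newly harmful outputs.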

This sounds simple, but in reality it is not easy to do right now, due to the lack of customizable frameworks and tooling to support each of these steps. This is what motivates us at Alinia: our goal is to become companies' AI partner and support them through this journey.

What do you think about the EU AI Act?

We welcome good legislation around AI, including generative AI.

The EU AI Act looks to regulate the use of these technologies and classifies them based on the level of risk they pose to users. This is paramount. It is key that we look out for users and do as much as we can to prevent unintended harms.

Looking ahead, what are the biggest changes we will see in the AI field in the next 5 years? 

As I shared in part 1, key technical breakthroughs have been piling on each other over the last decade. And there is now an explosion of research in the space that will surely lead to more technical breakthroughs and significant progress towards making AI systems even more powerful and hopefully safer. The biggest challenge will likely remain how to best align these systems and how to make them most useful to us, both in the enterprise context as well as for society as a whole.

A few predictions:

  • Highly personalized AI assistants will improve daily life management, becoming indispensable tools for users.

  • Automation of complex tasks that currently require human intelligence and reasoning will continue and increase, leading to unprecedented efficiency and innovation. While some jobs will surely disappear, new job roles focused on AI oversight and maintenance will be created.

  • Continued advancements in NLP will lead to AI models with even greater fluency and understanding of human languages. This will revolutionize customer service, content creation, and multilingual communication.

  • Increased focus on ethical AI practices and the implementation of regulations to ensure the responsible use of AI. Establishing clear guidelines and standards will help mitigate risks associated with AI deployment, promoting trust and safety in AI applications.

  • AI-driven healthcare solutions, including diagnostics, personalized medicine, drug discovery, and robotic surgery, will improve patient outcomes, reduce healthcare costs, and enable more accurate and early diagnoses, transforming the healthcare industry.

  • AI will also play a crucial role in addressing climate change and promoting sustainability. Applications will include optimizing energy usage, predicting environmental changes, and enhancing resource management.

  • Finally, AI will transform education by providing personalized learning experiences, intelligent tutoring systems, and advanced educational analytics.

These changes will shape the future of most industries and have the potential to improve various aspects of our daily lives. We all have the responsibility to ensure that these changes are indeed positive and inclusive to everyone.

To finish the conversation, I would like to know your perspective on the gender gap in STEM. According to recent statistics, it remains significant, with women making up only 28% of the STEM workforce. Why do you think this is happening?

STEM has historically been male dominated, which means women entering technical fields are almost always a minority, both in academia and in industry. Being a minority means that you are always more isolated and have less of a professional support network, which affects your advancement and promotion opportunities, not to mention the bias that you are always pushing up against.

So it is not hard to understand why women can become demotivated and feel it is not worth it. The system is often working against women in STEM, and they find themselves having to work twice as hard and sometimes get half the recognition. It is not fair, and it is certainly not for everyone. When you are thinking of starting a family, for example, this can all be too much to stomach and, for many, it is simply not where they want to spend their energy.

Another woman tech executive and I were reflecting recently that to be a successful woman leader in tech you almost by definition have to be hyperactive and have more energy than anyone else around you. Otherwise, you just run out of energy before you get there.

If you’re interested in what I shared in the past on this topic, you might be interested in my interview for Pioneer.

And what can we do about it? 

See above. I talk about it during my Pioneer interview ;-)

P.S.: you can read the first part of the conversation here
