Artificial intelligence is on everyone’s lips these days, sparking excitement, worry and endless debates. Is it a force for good or bad – or a force we have yet to fully understand? We sat down with prominent computer scientist and AI researcher Mária Bieliková to discuss these and other pressing issues surrounding AI, its impact on humanity, and the broader ethical dilemmas and questions of trust it raises.
Congratulations on becoming the latest laureate of the ESET Science Award. How does it feel to win the award?
I feel immense gratitude and happiness. Receiving the award from Emmanuelle Charpentier herself was an incredible experience, filled with intense emotions. This award doesn’t just belong to me – it belongs to all the remarkable people who accompanied me on this journey. I believe they were all equally thrilled. In IT, and in technology in general, results are achieved by teams, not individuals.
I’m delighted that this is the first time the main category of the award has gone to the field of IT and AI. 2024 was also the first year the Nobel Prize was awarded for progress in AI. In fact, there were four Nobel Prizes for AI-related inventions – two in Physics for machine learning with neural networks and two in Chemistry for training deep neural networks that predict protein structures.
And of course, I feel immense pride in the Kempelen Institute of Intelligent Technologies, which was founded four years ago and now holds a stable position in the AI ecosystem of Central Europe.
A leading Slovak computer scientist, Mária Bieliková has carried out extensive research in human-computer interaction analysis, user modelling and personalization. Her work also extends to data analysis and the modelling of antisocial behavior on the web, and she’s a prominent voice in the public discourse about trustworthy AI, the spread of disinformation, and how AI can be used to combat the problem. She also co-founded and currently heads up the Kempelen Institute of Intelligent Technologies (KInIT), where ESET acts as a mentor and partner. Ms. Bieliková recently won the Outstanding Scientist in Slovakia category of the ESET Science Award.
Author and historian Yuval Noah Harari has made the pithy observation that for the first time in human history, nobody knows what the world will look like in 20 years or what to teach in schools today. As someone deeply involved in AI research, how do you envision the world 20 years from now, particularly in terms of technology and AI? What are the skills and competencies that will one day be essential for today’s children?
The world has always been difficult, uncertain, and ambiguous. Today, technology accelerates these challenges in ways that people struggle to manage in real time, making it hard to foresee the consequences. AI not only helps us automate our activities and replace humans in various fields, but also create new structures and synthetic organisms, which could potentially cause new pandemics.
Even if we didn’t anticipate such scenarios, technology is consciously or unconsciously used to divide groups and societies. It’s not just digital viruses aiming to paralyze infrastructure or seize resources; it’s the direct manipulation of human thinking through propaganda spread at the speed of light and at a magnitude we couldn’t have imagined a few decades ago.
I don’t know what kind of society we’ll live in 20 years from now or how the foundations of humanity will change. It may take longer, but we might even be able to change our meritocratic system, currently based on the evaluation of knowledge, in a way that doesn’t divide society. Perhaps we’ll change the way we treat knowledge once we realize we can no longer fully trust our senses.
I’m convinced that even our children will increasingly move away from the need for knowledge and from measuring success in various tests, including IQ tests. Knowledge will remain important, but it must be knowledge that we can apply. What will really matter is the energy people are willing to invest in doing meaningful things. That is true today, but we often underutilize this perspective when discussing education. We still evaluate cognitive skills and knowledge despite knowing that these competencies alone are insufficient in the real world today.
I believe that as technology advances, our need for strong communities and for the development of social and emotional skills will only grow.
As AI continues to advance, it challenges long-standing philosophical ideas about what it means to be human. Do you think René Descartes’ observation about human exceptionalism, “I think, therefore I am”, will need to be re-evaluated in an era where machines can “think”? How far do you believe we are from AI systems that could push us to redefine human consciousness and intelligence?
AI systems, especially the large foundation models, are revolutionizing the way AI is used in society. They are continually improving. Before the end of 2024, OpenAI announced new models, o3 and o3-mini, which achieved significant advances in all tests, including the ARC-AGI benchmark that measures AI’s efficiency in acquiring skills for unknown tasks.
From this, one might assume that we are close to achieving Artificial General Intelligence (AGI). Personally, I believe we are not quite there with current technology. We have excellent systems that can assist in programming certain tasks, answer numerous questions, and in many tests they perform better than humans. However, they don’t truly understand what they are doing. Therefore, we cannot yet talk about genuine thinking, although some reasoning behind task resolution is already being done by machines.
Given how we understand terms like intelligence and consciousness today, we can say that AI possesses a certain level of intelligence – meaning it has the ability to solve complex problems. However, as of now, it lacks consciousness. Based on how it functions, AI does not have the capability to feel and use emotions in the tasks it is given. Whether this will ever change, or whether our understanding of these concepts will evolve, is difficult to predict.

The notion that “to create is human” is being increasingly questioned as AI systems become capable of producing art, music, and literature. In your view, how does the rise of generative AI impact the human experience of creativity? Does it enhance or diminish our sense of identity and uniqueness as creators?
Today, we witness many debates on creativity and AI. People devise various tests to showcase how far AI has come and where these AI systems or models surpass human capabilities. AI can generate images, music, and literature, some of which could be considered creative, but certainly not in the same way as human creativity.
AI systems can and do create original artifacts. Although they generate them from pre-existing material, we could still find some genuinely new creations among them. But that’s not the only important aspect. Why do people create art, and why do people watch, read, and listen to art? At its essence, art helps people find and strengthen relationships with one another.
Art is an inseparable part of our lives; without it, our society would be very different. This is why we can admire AI-generated music or paintings – AI was created by humans. However, I don’t believe AI-generated art would satisfy us long-term to the same extent as real art created by humans, or by humans with the support of technology.
Just as we develop technologies, we also seek reasons to live and to live meaningfully. We may live in a meritocracy where we try to measure everything, but what brings us closer together and characterizes us are stories. Yes, we could generate those too, but I’m talking about the stories that we live.
AI research has seen fluctuations in progress over the decades, but the recent pace of advancement – especially in machine learning and generative AI – has surprised even many experts. How fast is too fast? Do you think this rapid progress is sustainable or even desirable? Should we slow down AI innovation to better understand its societal impacts, or does slowing down risk stifling beneficial breakthroughs?
The speed at which new models are emerging and improving is unprecedented. This is largely due to the way our world functions today – an enormous concentration of wealth in private companies and certain parts of the world, as well as a global race in several fields. AI is a big part of these races.
To some extent, progress depends on the exhaustion of today’s technology and the development of new approaches. How much can we improve current models with known methods? To what extent will big companies share new approaches? Given the high cost of training large models, will we just be observers of improving black boxes?
At present, there is no balance between the systems humanity can create and our understanding of their effects on our lives. Slowing down, given how our society works, is not possible, in my view, without a paradigm shift.
This is why it is essential to allocate resources and energy to researching the effects of these systems and to testing the models themselves, not just through standardized tests as their creators do. For example, at the Kempelen Institute, we research the ability and willingness of models to generate disinformation. Recently, we have also been looking into the generation of personalized disinformation.
There’s a lot of excitement around AI’s potential to solve global challenges – from healthcare to climate change. Where do you believe the promise of AI is greatest in terms of practical and ethical applications? Can AI be the “technological fix” for some of humanity’s most pressing issues, or do we risk overestimating its capabilities?
AI can help us tackle the most pressing issues while simultaneously creating new ones. The world is full of paradoxes, and with AI, we see this at every turn. AI has been helpful in various fields. Healthcare is one such area where, without AI, some progress – for example, in developing new medicines – would not be possible, or we would have to wait much longer. AlphaFold, which predicts the structure of proteins, has enormous potential and has been used for years now.
On the other hand, AI also enables the creation of synthetic organisms, which can be beneficial but also pose risks such as pandemics or other unforeseen situations.
AI assists in spreading disinformation and manipulating people’s thinking on issues like climate change, while at the same time, it can help people understand that climate change is real. AI models can demonstrate the potential consequences for our planet if we continue on our current path. This is important, as people tend to focus only on short-term challenges and often underestimate the seriousness of the situation until it directly affects them.
However, AI can only help us to the extent that we, as humans, allow it to. That is the biggest challenge. Since AI doesn’t understand what it produces, it has no intentions. But people do.

With great potential also come significant risks. Prominent figures in tech and AI have expressed concerns about AI becoming an existential threat to humanity. How do you think we can balance responsible AI development with the need to push boundaries, all while avoiding alarmism?
As I mentioned before, the paradoxes we witness with AI are immense, raising questions for which we have no answers. They pose significant risks. It’s fascinating to explore the possibilities and limits of technology, but on the other hand, we are not ready – as individuals, nor as a society – for this kind of automation of our skills.
We need to invest at least as much in researching technology’s impact on people, their thinking, and their functioning as we do in the technologies themselves. We need multidisciplinary teams to collectively explore the possibilities of technology and their impact on humanity.
It’s as if we were creating a product without caring about the value it brings to the customer, who would buy it, and why. If we didn’t have a buyer, we wouldn’t sell much. The situation with AI is more serious, though. We have use cases, products, and people who want them, but as a society, we don’t fully understand what is happening when we use them. And perhaps most people don’t even want to know.
In today’s global world, we cannot stop progress, nor can we slow it down. It only slows when we are saturated with results and find it hard to improve, or when we run out of resources, as training large AI models is very expensive. That is why the best protection is researching their impact from the beginning of their development and creating boundaries for their use. We all know that it is prohibited to drink alcohol before the age of 18, or 21 in some countries, yet often without hesitation, we allow children to chat with AI systems, which they can easily liken to humans and trust implicitly without understanding the content.
Trust in AI is a major issue globally, with attitudes toward AI systems varying widely between cultures and regions. How can the AI research community help foster trust in AI technologies and ensure that they are viewed as beneficial and trustworthy across diverse societies?
As I was saying, multidisciplinary research is essential not only for discovering new possibilities and improving AI technologies but also for evaluating their capabilities, how we perceive them, and their impact on individuals and society.
The rise of deep neural networks is changing the scientific methods of AI and IT. We have artificial systems whose core principles are known, but through scaling, they can develop capabilities that we cannot always explain. As scientists and engineers, we devise ways to ensure the necessary accuracy in specific situations by combining various processes. However, there is still much we don’t understand, and we cannot fully evaluate the properties of these models.
Such research doesn’t produce direct value, which makes it challenging to garner voluntary support from the private sector on a larger scale. This is where the private and public sectors can collaborate for the future of all of us.
AI regulation has struggled to keep up with the field’s rapid advancements, and yet, as someone who advocates for AI ethics and transparency, you’ve likely considered the role of regulation in shaping the future. How do you see AI researchers contributing to policies and regulations that ensure the ethical and responsible development of AI systems? Should they play a more active role in policymaking?
Thinking about ethics is important, not only in research but also in the development of products. However, it can be quite costly, because it is important that a real need arises at the level of a critical mass. We still need to consider the dilemma of new knowledge acquisition versus the potential interference with the autonomy or privacy of individuals.
I am convinced that a good resolution is possible. The question of ethics and trustworthiness must be an integral part of the development of any product or research from the beginning. At the Kempelen Institute, we have experts on ethics and regulations who help not only researchers but also companies in evaluating the risks related to the ethics and trustworthiness of their products.
We see that all of us are becoming more sensitive to this. Philosophers and lawyers think about the technologies and offer solutions that do not eliminate the risks, while scientists and engineers are asking themselves questions they hadn’t considered before.
In general, there are still too few of these activities. Our society evaluates results primarily based on the number of scientific papers produced, leaving little room for policy advocacy. This makes it all the more important to create space for it. In recent years, in certain circles, such as the natural language processing or recommender systems communities, it has become standard for scientific papers to include ethics reviews as part of the review process.
As AI researchers work toward innovation, they are often confronted with ethical dilemmas. Have you encountered challenges in balancing the ethical imperatives of AI development with the need for scientific progress? How do you navigate these tensions, particularly in your work on personalized AI systems and data privacy?
At the Kempelen Institute, it has been helpful to have philosophers and lawyers involved from the very beginning, helping us navigate these dilemmas. We have an ethics board, and diversity of opinions is one of our core values.
Needless to say, it’s not easy. I find it particularly problematic when we want to translate research results into practice and encounter issues with the data the model was trained on. In this regard, it is essential to ensure transparency from the outset, so we can not only write a scientific paper but also help companies innovate their products.
Given your collaboration with large technology companies and organizations, such as ESET, how important do you think it is for these companies to lead by example in promoting ethical AI, inclusivity, and sustainability? What role do you think companies should play in shaping a future where AI is aligned with societal values?
The Kempelen Institute was established through the collaboration of individuals with strong academic backgrounds and visionaries from several large and medium-sized companies. The idea is that shaping a future where AI aligns with societal values cannot be realized by just one group. We have to connect and seek synergies wherever possible.
For that reason, in 2024, we organized the first edition of the AI Awards, focused on trustworthy AI. This event culminated at the Forbes Business Fest, where we announced the laureate of the award – AI:Dental, a startup. In 2025 we are successfully continuing the AI Awards and have received more, and higher-quality, applications.
We began discussing the topic of AI and disinformation nearly 10 years ago. Back then, it was more academic, but even then, we witnessed some malicious disinformation, especially related to human health. We had no idea of the immense influence this topic would eventually have on the world. And it’s only one of many pressing issues.
I fear that the public sector alone has no chance of tackling these issues without the help of large companies, especially today when AI is being used by politicians to gain popularity. I consider the topic of trustworthiness in technology, particularly AI, to be as important as other key topics in CSR. Supporting research on the features of AI models and their impact on people is fundamental for sustainable progress and quality of life.
Thank you for your time!