
One question can put you back in control of any AI solution
Part 3 in “AI is Coming, Now What Do I Do?”
A waiter in a cafe, a new co-worker, a passer-by asking you for a lighter: when you interact with strangers, societal rules give you a baseline estimate of the other person’s motivations and expected behaviour. This makes for generally safe, predictable encounters that tend to benefit everyone involved.
There is now a new type of stranger you will have to interact with: AI models. You can certainly pretend they are not there ... or just let them run the show ... _or_ you can learn to benefit from these interactions by understanding their motivation. From your social media feed to navigation apps, your favorite e-commerce platform, and possibly even a healthcare app, there will be an increasing number of AI solutions around you, making decisions, recommending, and classifying. Sometimes you will know about it; sometimes you won’t. One solid way to retain agency in the AI age is to be inquisitive: ask “what is this tool optimizing for?”
It’s a good question to ask about your own motivations, too, but in the upcoming years, be sure to keep this core question center stage when getting to know any new model or service. The vast majority of modern AI solutions have a single objective, picked by their creators when the model was trained: maximizing engagement, minimizing predicted error, or enhancing the “relevance and coherence” of an output dialogue (often there is more than one point of optimization, or a complex combination of models; for now, we will simplify). How exactly does this “motivation” affect you as a user? Generally, each optimization made by an AI model comes with some kind of trade-off or cost.
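To make that concrete, here is a minimal sketch of what “picking the objective” looks like. All function and field names are hypothetical; the point is that the training machinery is identical either way, and the entire “motivation” is whichever function the builders plug in:

```python
# Two candidate objectives for the same hypothetical app. Everything else
# about training stays the same; only this function changes.

def engagement_objective(session) -> float:
    """Reward long sessions and next-day returns (a common commercial choice)."""
    return session["minutes_spent"] + 5.0 * session["came_back_next_day"]

def wellbeing_objective(session) -> float:
    """Reward the outcome the user actually wanted (rarely what ships)."""
    return session["goal_completed"] - 0.1 * session["minutes_spent"]

# The single line where a system's "motivation" gets decided:
OBJECTIVE = engagement_objective

session = {"minutes_spent": 42, "came_back_next_day": 1, "goal_completed": 0}
print(OBJECTIVE(session))  # the number the system will learn to maximize
```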
Navigation Apps (Google Maps, Waze) — Shortest Time
These apps prioritize getting users to their destination quickly. While efficient, this metric can overlook user preferences for scenic routes, routes with fewer stops, or even safer paths with better lighting or pedestrian lanes.
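A toy scoring function makes the trade-off visible. The weights and fields below are illustrative assumptions, not how any real navigation engine works:

```python
# Toy route scoring: lower cost wins. With the default weights, only travel
# time matters, which is effectively what "shortest time" optimization means.

def route_cost(route, w_time=1.0, w_scenery=0.0, w_safety=0.0):
    return (w_time * route["minutes"]
            - w_scenery * route["scenic_score"]
            - w_safety * route["lighting_score"])

routes = [
    {"name": "highway", "minutes": 22, "scenic_score": 1, "lighting_score": 5},
    {"name": "riverside", "minutes": 28, "scenic_score": 9, "lighting_score": 8},
]

print(min(routes, key=route_cost)["name"])  # 'highway': time is all that counts
print(min(routes, key=lambda r: route_cost(r, 1, 1, 1))["name"])  # 'riverside'
```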
Language learning apps — Engagement
Apps like Duolingo prioritize metrics like “daily practice streaks” to keep users engaged, which doesn’t necessarily correlate with actual language fluency. This hints at why so few users of these apps ever successfully learn a language: it is simply not the app’s motivation (what would you think of a teacher with that motivation, by the way?). This gamified metric ensures consistent engagement and keeps users returning, increasing the likelihood they will upgrade to a premium account for streak “protection” or an ad-free experience.
Dating apps — Engagement… again
Instead of purely optimizing for matches between people looking for love… or otherwise, Tinder uses engagement metrics like “swipe probability” to predict which profiles a user is most likely to engage with by swiping right. The point is not necessarily to match users, but to maximize interaction and time spent in the application, again increasing the likelihood that users will return (so, what does that tell you about its aspirations for your love life?). Prioritizing engagement over successful matches creates a high level of in-app activity, which benefits Tinder by keeping users interested and more likely to subscribe to premium features for better-quality matches.
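The gap between the two objectives fits in one line of ranking code. The numbers below are invented; “swipe_prob” stands in for a model’s predicted chance that you swipe right, “compatibility” for predicted match quality:

```python
# The same three profiles, ranked two different ways (all numbers invented).
profiles = [
    {"name": "A", "swipe_prob": 0.9, "compatibility": 0.2},
    {"name": "B", "swipe_prob": 0.4, "compatibility": 0.8},
    {"name": "C", "swipe_prob": 0.7, "compatibility": 0.6},
]

by_engagement = sorted(profiles, key=lambda p: -p["swipe_prob"])
by_matching = sorted(profiles, key=lambda p: -p["compatibility"])

print([p["name"] for p in by_engagement])  # ['A', 'C', 'B']: keeps you swiping
print([p["name"] for p in by_matching])    # ['B', 'C', 'A']: gets you off the app
```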
Loan Classifiers — Risk Assessment Accuracy
Almost always, a loan application is first run through a classifier that assesses the borrower’s risk profile. Traditional money lenders don’t like losing money, so they prioritize accuracy over sheer volume or engagement (non-traditional ones might have a different goal in mind…).
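For the technically curious, here is a minimal sketch of the kind of risk classifier described above, built with scikit-learn on synthetic data; every feature, number, and threshold is invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: [income in thousands, debt-to-income ratio]
X = rng.normal(loc=[50, 0.3], scale=[15, 0.1], size=(500, 2))
# Synthetic label: 1 = defaulted. Higher debt ratio and lower income mean more risk.
y = (X[:, 1] * 10 - X[:, 0] / 20 + rng.normal(0, 1, 500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([[45.0, 0.45]])  # a hypothetical new application
print(f"predicted default risk: {model.predict_proba(applicant)[0, 1]:.2f}")
```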
It might not be immediately apparent, and the examples above are a specific subset, but almost no widely used AI-powered product optimizes for your well-being, healthy habits, and the like. One can argue that the companies that at one point optimized for consumer benefit were less economically successful, ran out of money, or didn’t get funded at all. Another common argument against full customer centricity is that it is not the responsibility of the commercial sector to take care of you as a consumer. That juncture is typically where legislation and regulation should step in, but we can observe almost all governments struggling on this front, especially in keeping up with tech-driven advances in products. There is a need for a long and thorough discussion about the responsibilities of legislators and businesses and where they meet, but if I had to serve the bitter pill in fewer words: you are on your own. No one will ask the right questions for you. And it’s quite unfair, because the matter is not simple. I would like to offer two thoughts to get going with untangling this digital ball of yarn.
Aspect #1: Definitions of good
Even if everyone wanted to do the right thing, it’s almost always freakishly hard to “quantify” a healthy outcome. Gambling companies in the UK, for example, are obliged by the government to screen their users for “unhealthy gambling habits” and block (protect) them from further abuse. The trouble is that there is no unanimously agreed way to define those habits. Fuzzy definitions lead to no clear labels; no labels mean no training data; no training data … you get the gist. More often than not, it’s difficult to pin down ‘good’.
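In code terms, the pipeline stalls at the very first step. A toy illustration with invented data:

```python
# Without an agreed definition of "unhealthy gambling", annotators disagree
# and many labels come back empty, leaving little to train on.
sessions = [
    {"bets_per_hour": 40, "label": None},  # unhealthy? annotators couldn't agree
    {"bets_per_hour": 2, "label": 0},
    {"bets_per_hour": 55, "label": None},
]

training_data = [s for s in sessions if s["label"] is not None]
print(f"usable training examples: {len(training_data)} of {len(sessions)}")
```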
Imagine, however, the following scenarios, where we can come up with clearer metrics to optimize for ‘good’. We will rework some existing, well-known businesses to gear their core models toward metrics that take consumer well-being into consideration.
Healthy Choice Meal Delivery
Wolt (a restaurant and grocery delivery service) suddenly optimizes for “nutritional density per calorie” instead of simply maximizing repeat orders or delivery speed. The app, in this scenario, recommends nutritious meals based on a user’s dietary preferences, improving health outcomes. Would it destroy their business? While more complex, this metric could attract health-focused customers, build a loyal user base, and reduce churn. Economically speaking, it would attract a premium customer segment willing to pay for high-quality, health-oriented offerings.
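As a sketch, the proposed metric could be as simple as a nutrient score divided by calories; the scoring scheme and the meal data below are assumptions for illustration:

```python
# Invented meal data with a made-up 0-100 nutrient score.
meals = [
    {"name": "burger combo", "nutrient_score": 30, "calories": 1100},
    {"name": "salmon bowl", "nutrient_score": 80, "calories": 550},
    {"name": "veggie wrap", "nutrient_score": 60, "calories": 420},
]

def density(meal):
    """Nutritional density per calorie: higher is better."""
    return meal["nutrient_score"] / meal["calories"]

# Recommend by density instead of by predicted reorder probability.
for meal in sorted(meals, key=density, reverse=True):
    print(f"{meal['name']}: {density(meal):.3f} nutrient points per calorie")
```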
From “Total Transaction Volume” to Customer Referral Rate
Wise (formerly TransferWise) tracks transaction volume and the fees earned on those transfers, especially international ones. It could instead focus on how many customers refer new users, an indicator of high satisfaction with its low-cost, transparent service. Would it destroy their business? Emphasizing referrals would drive organic growth and strengthen Wise’s reputation as a trusted service, aligning with its branding around “fair” currency exchange. Satisfied customers who refer others are likely to stay loyal themselves, increasing the lifetime value of each user.
Content Discovery Depth as the New Engagement
Instead of tracking likes or ad clicks, Instagram could prioritize how often users discover new interests or creators who inspire them. Would it destroy their business? Instagram could foster more _meaningful_ engagement by enhancing authentic discovery rather than endless scrolling. Users would feel more connected to new content, reducing fatigue and increasing loyalty, with potential indirect benefits to advertiser targeting, reach, and engagement.
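One possible definition, purely as an assumption for illustration: “discovery depth” as the share of a session spent on creators the user has never engaged with before:

```python
def discovery_depth(session_creators, known_creators):
    """Fraction of viewed posts that come from previously unseen creators."""
    if not session_creators:
        return 0.0
    new = sum(1 for c in session_creators if c not in known_creators)
    return new / len(session_creators)

known = {"chef_ana", "trail_mike"}
session = ["chef_ana", "glass_artist", "chef_ana", "city_gardens", "trail_mike"]
print(f"discovery depth: {discovery_depth(session, known):.2f}")  # 0.40
```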
AI systems have no awareness of why they’re optimizing for a particular goal. They simply follow the rules set by the humans who define what success looks like in the system’s context. By participating in the creation and establishment of desirable metrics and outcomes, we can steer AI development toward better, more human-centric results.
Aspect #2: Human nature
Now, to the doomy-gloomy part. Let’s consider a difficult question: do we, as users and consumers of AI services, actually want these ‘good’, healthy outcomes mentioned above? Would we not get bored with apps that optimize for our well-being, financial stability, and healthy habits? The reason the venture capitalists of this world are not forcing their portfolio companies to switch away from metrics like “volume of impulsive buys” (and I am making my best educated guess here) is that they don’t register strong enough demand. Public outcry, yes; government pressure, yes; but voting with our dollars shows we choose flashing lights, funny videos, and exciting new devices over health and altruism. Whatever you choose, though, the important part is to be aware of the choice; it opens up a bonus opportunity…

Conclusion
Asking the right question is a critical skill going forward. By knowing the “why” behind AI’s optimization metrics, we shift from passive consumption to intentional use. As always, control comes at the cost of paying extra attention. Before using a new tool, ask: how might this optimization metric affect my experience?
This is especially true if you are a business purchasing solutions for your employees: you have more leverage than individual consumers to tailor the output to your needs. As users, our curiosity and awareness can hold companies accountable, encouraging a balanced approach that respects both innovation and individual agency.
But that’s not all; recall the bonus opportunity mentioned above. What might that be? The opportunity to circumvent, with the help of AI, certain human weaknesses. The final chapter, Part 4 of “AI is Coming, Now What Do I Do?”, will cover an approach to this, specifically in overcoming inherited human biases in work, governance, and other aspects of life that are not beneficial to us.