One question can put you back in control of any AI solution
Part 3 of “AI is Coming, Now What Do I Do?”
A waiter in a café, a new co-worker, a passerby asking you for a lighter — when you interact with strangers, societal rules let you form a rough estimate of their motivation, which makes the interaction safer and more likely to benefit everyone involved. There is now a new type of stranger you will have to interact with: AI models. You can pretend they are not there, or let them run the show, or you can learn to benefit by figuring out their motivation. From your social media feed to navigation apps, your favorite e-commerce platform, and possibly even a suggested medical diagnosis — there will be an increasing number of AI solutions around you, making decisions, recommending, and classifying. Sometimes you will know about it; sometimes you won't. If there's one way to regain agency over your life in the AI age, it is to ask: "What is this tool optimizing for?"
It's a good question to ask yourself, too, but in the coming years it should take center stage and become a default part of any model's intro card. The vast majority of modern AI solutions have a single objective, picked by their creators when the model was trained: maximizing engagement, minimizing predicted error, or enhancing the "relevance and coherence" of an output dialogue. (Often there is more than one optimization target, or a complex combination of models; for now, we will simplify.) How exactly does this "motivation" affect you as a user?
Navigation Apps (Google Maps, Waze) — Shortest Time
These apps prioritize getting users to their destination as quickly as possible. While efficient, this metric can overlook user preferences for scenic routes, routes with fewer stops, or even safer paths with better lighting or pedestrian lanes.
Language learning apps — Engagement
Duolingo prioritizes metrics like "daily practice streaks" to keep users engaged, which doesn't necessarily correlate with actual language fluency. That goes some way toward explaining why so few people reach fluency with these apps: fluency is not their motivation. This gamified metric ensures consistent app engagement and keeps users returning, increasing the likelihood that they upgrade to a premium account for streak "protection" or an ad-free experience.
Dating apps — Engagement… again
Instead of purely optimizing for matches, Tinder uses engagement metrics like “swipe probability” to predict which profiles a user is most likely to engage with by swiping right. This is meant to maximize user interaction and time spent on the app, increasing the likelihood that users will return. Prioritizing engagement over successful matches creates a high level of in-app activity, which benefits Tinder by keeping users interested and more likely to subscribe to premium features for better matches.
Loan Classifiers — accuracy
Almost always, a decision to grant you a loan is first run through a classifier that assesses the borrower's risk profile. Money lenders don't like losing money, so they prioritize accuracy over sheer volume or engagement.
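Accuracy itself is a choice with consequences. A toy sketch below (entirely synthetic labels, not any real lender's model) shows a well-known caveat: when defaults are rare, a "model" that denies risk exists at all can still score high on raw accuracy — another reminder that the metric, not the model, defines what "success" means.

```python
# Synthetic example: 100 borrowers, 1 = defaults, 0 = repays.
# Defaults are rare (10 out of 100), as is typical for loan portfolios.
labels = [1] * 10 + [0] * 90

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# A degenerate "model" that predicts no one ever defaults.
always_repay = [0] * len(labels)

print(accuracy(always_repay, labels))  # 0.9 -- high accuracy, zero defaults caught
```

This is why real risk teams look beyond accuracy to measures that weight missed defaults, but the underlying point stands: whoever picks the metric picks the behavior.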
I don't know if you've noticed, but almost no widely used AI-powered products optimize for your well-being, healthy habits, and so on. One can argue that companies that chose consumer benefit weren't economically successful (or didn't get funded), ran out of money, and thus didn't make the cut. Another strong argument is that it's not the responsibility of the commercial sector to take care of you as a consumer. That's where legislation should step in, but we can observe almost all governments struggling to do so. We can (and generally should) talk long and hard about the responsibility of legislators and businesses, but I would like to focus on two aspects.
Aspect #1 Definitions of good
Even if everyone wanted to do the right thing — it's almost always freakishly hard to quantify a healthy outcome. Gambling companies in the UK, for example, are obliged by the government to screen their users for "unhealthy gambling habits" and block them from further abuse. The thing is, there is no clear way to define those habits. No definition — no labels; no labels — no training data; no training data — you get the gist. But we could start small. Let's imagine that we somehow collectively forced existing companies to change their "motivation":
Healthy Choice Meal Delivery
Wolt suddenly optimizes for “nutritional density per calorie” instead of just repeat orders or time-saving. The app recommends nutritious meals based on a user’s dietary preferences, enhancing health outcomes. Would it destroy their business? While more complex, this metric could attract health-focused customers, build a loyal user base, and reduce churn. For companies, this would attract a premium customer segment willing to pay for high-quality, health-oriented offerings.
From “Total Transaction Volume” to Customer Referral Rate
Wise (formerly TransferWise) typically tracks transaction volume and fees earned, especially for international transfers. However, it could focus on how many customers refer new users, indicating high satisfaction with its low-cost, transparent service. Would it destroy their business? Emphasizing referrals would drive organic growth and strengthen Wise's reputation as a transparent service, aligning with its branding around "fair" currency exchange. Satisfied customers who refer others are likely to stay loyal, increasing the lifetime value of each user.
Content Discovery Depth as new engagement
Instead of tracking likes or ad clicks, Instagram could prioritize how often users discover new interests or creators who inspire them. Would it destroy their business? Instagram could foster more meaningful engagement by enhancing authentic discovery over endless scrolling. Users would feel more connected to new content, reducing fatigue and increasing loyalty, with potential indirect benefits to advertiser reach and engagement.
AI systems don't inherently understand why they're optimizing for a particular goal. They simply follow the rules set by the humans who define what success looks like. By participating in the definition of desirable metrics and successful outcomes, we can steer AI development toward better results.
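The point above can be made concrete in a few lines. In this hypothetical sketch (invented item names and scores, not any real platform's ranking algorithm), the same candidate items are ranked under two different objectives — predicted engagement versus predicted user benefit — and the "best" recommendation changes entirely, even though nothing about the data or the ranking code did:

```python
# Candidate items for a feed: (name, predicted_engagement, predicted_benefit).
# Scores are made up for illustration.
items = [
    ("outrage_clip",   0.92, 0.10),
    ("tutorial_video", 0.55, 0.85),
    ("news_summary",   0.60, 0.70),
]

def rank(items, objective):
    """Sort items so the highest-scoring one under `objective` comes first."""
    return sorted(items, key=objective, reverse=True)

# Objective A: maximize engagement -- what most feeds optimize today.
by_engagement = rank(items, objective=lambda i: i[1])

# Objective B: maximize predicted user benefit -- the alternative
# this article argues for.
by_benefit = rank(items, objective=lambda i: i[2])

print(by_engagement[0][0])  # -> outrage_clip
print(by_benefit[0][0])     # -> tutorial_video
```

The model here is trivial on purpose: the only thing that changed between the two runs is the objective function, which is exactly the lever the humans behind the system control.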
Aspect #2 Human nature
Now, to the doomy-gloomy part — do we actually want healthy outcomes? Would we not get bored with apps that optimize for our well-being, financial stability, and healthy habits? The reason the venture capitalists of this world are not forcing their portfolio companies to switch their metric from "volume of impulsive buys" to "sustainable purchases made" is that they don't register strong enough demand. Public outcry — yes; government pressure — yes; but votes in dollars show that we choose flashing lights, funny videos, and exciting new devices. Whatever you choose, the important part is to be aware of the choice; it opens up a bonus opportunity…
Conclusion
Asking the right question is a critical skill going forward. By knowing the "why" behind an AI's optimization metric, we shift from passive consumption to intentional use. As always, control comes at the cost of paying extra attention. Before using a new tool, ask: how might this optimization metric affect my experience?
Especially if you are a business that purchases solutions for your employees — you have more leverage than individual consumers to tailor the output to your needs. As users, our curiosity and awareness can hold companies accountable, encouraging a balanced approach that respects innovation and individual agency.
But that’s not all; the bonus opportunity mentioned above — circumventing our human weaknesses using AI. The final chapter, Part 4 of “AI is Coming, Now What Do I Do?” will cover the potential approach to overcoming inherited human bias in work, governance, and other aspects of life.