
There’s a strange new ritual taking shape in the corporate world, and most of us have already taken part in it whether we meant to or not. It goes something like this: a company receives your application, a machine scans your résumé, another machine evaluates your “digital footprint,” and an algorithm (cold, tireless, impressively unbothered by your liberal arts degree) decides whether you are worthy of human attention. Only after surviving this gauntlet of silicon gatekeepers do you earn the privilege of interacting with an actual person.
For years, job hunting resembled a two-way courtship. Applicants polished résumés. Employers reviewed them. Interviews, awkward, hopeful, palpably human, followed. Someone made a decision. It was imperfect but personal. Now, more and more companies are outsourcing their hiring to artificial intelligence, and it feels less like courting a potential employer and more like petitioning a distant oracle programmed by an intern.
We are told this is efficient, unbiased, future-proof: a neutral system scanning for skills, competencies, and patterns that humans might overlook. But increasingly, it feels eerily similar to asking social media platforms to vouch for our worthiness. Your online presence (your curated selfies, your memes, your half-forgotten posts from 2011) has become a de facto part of the application process. If your Instagram doesn’t disqualify you, maybe your LinkedIn endorsements will. “She has eight endorsements for leadership,” an algorithm might proudly note, while conveniently ignoring that they all came from former coworkers who just wanted to be polite.
The unsettling part is how quietly this shift occurred. Companies now rely on AI-powered applicant tracking systems to sift through candidates with the ruthless efficiency of a paper shredder. They scan résumés for keywords, eliminating anyone whose phrasing isn’t sufficiently optimized for machine digestion. They analyze video interviews for “microexpressions” and “vocal consistency,” as though the act of sweating through a Zoom call were some kind of psychological tell. One system even claims it can assess “cultural fit” using natural language processing, which is corporate-speak for “We want someone who speaks like us and therefore thinks like us.”
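To make the mechanics concrete, here is a deliberately crude sketch of what keyword-based résumé screening amounts to. This is a hypothetical illustration, not any vendor's actual system; the keyword list, threshold, and function name are all invented for the example.

```python
# Hypothetical sketch of naive keyword-based résumé screening.
# Not any real applicant tracking system; the keywords and
# threshold are invented for illustration.
REQUIRED_KEYWORDS = {"python", "sql", "stakeholder", "agile"}

def passes_screen(resume_text: str, threshold: int = 3) -> bool:
    """Count how many required keywords appear; reject below the threshold."""
    words = set(resume_text.lower().split())
    hits = len(REQUIRED_KEYWORDS & words)
    return hits >= threshold

# A candidate who describes the same work in different words is simply dropped:
passes_screen("Led agile teams, built SQL pipelines in Python for stakeholders")  # True
passes_screen("Directed cross-functional projects and database reporting")        # False
```

Even this toy version shows the brittleness the essay describes: matching is literal, so phrasing that isn’t “optimized for machine digestion” fails silently, regardless of what the candidate actually did.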
And these tools aren’t confined to low-stakes roles. Increasingly, they’re being used to filter candidates for jobs with real responsibility: leadership positions, financial oversight roles, jobs involving public trust. The irony should be enough to make your head spin: a machine is determining whether you’re responsible enough to be in charge.
Of course, AI is not inherently the villain. Used wisely, it can help reduce bias, improve efficiency, and broaden access. But too often, these hiring algorithms simply reinforce the biases of the data they are trained on. If a company historically favoured outgoing extroverts from elite universities, the AI may continue to do precisely that, except faster, at scale, and without stopping to question why all the boardrooms look eerily similar.
Even more troubling is the cultural implication: we are inching toward a world where people feel obliged to perform employability in public. Your posts must be professional but relatable. Your photos should radiate vitality but not frivolity. Your opinions must exist, but only in the safest, vaguest forms. The online self becomes another résumé, one that follows you everywhere, glowing faintly behind your digital shoulder with every job you pursue.
This is not merely a matter of privacy. It’s a matter of identity. When companies rely on algorithms to hire, they aren’t simply choosing employees; they’re choosing data models that approximate people. And while the models may be consistent, they are terribly incomplete. AI can tell if you know SQL. It cannot tell if you’re thoughtful, principled, or quietly brilliant. It can detect your ability to speak confidently on camera, but it cannot detect your capacity to lead with empathy in a crisis. It can identify patterns in your work history, but it cannot grasp the context behind your choices: the sick parent you cared for, the industry that collapsed, the bold leap of leaving a stable job for one that mattered.
In the traditional job interview, flawed as it was, there remained the possibility of surprise. A candidate could charm, impress, or challenge expectations. Humanity itself could alter the outcome. Now, we are asked to present not our full selves, but our most machine-readable selves. And that should make us uneasy.
It’s not that AI should be banished from hiring. It’s that we must remain vigilant about how and where it wields power. If companies want to use algorithms as assistants, fine. But when those algorithms become the first, last, and sometimes only gatekeeper, we risk turning work into an automated caste system where only those who speak the dialect of the algorithm pass through.
The corporate world loves to speak of innovation, agility, disruption. Yet there is something deeply unimaginative about relying on machines to do the human work of judgment. It suggests a fear of complexity, an aversion to ambiguity, a preference for tidy metrics over messy humanity. But responsibility, the real kind, cannot be measured entirely by pattern-matching. Leadership cannot be identified by sentiment analysis. And trust cannot be bestowed by an algorithm.
We deserve better than being reduced to data points. We deserve to be evaluated by people who understand what it means to be one.