Artificial Intelligence, Gender, and Autism

Note: This post was originally written years ago and reflects an earlier stage of AI development.

When artificial intelligence first entered the public conversation, one of the standout moments was IBM’s Watson defeating top human players on Jeopardy!. At the time, it raised a simple question: if a machine can outperform humans in a knowledge-based game, what does that actually say about intelligence?

Watson was exceptionally good at parsing language, identifying patterns, and retrieving probable answers. But it also made mistakes that were revealing—not because it lacked data, but because it lacked understanding. It could process syntax, but it struggled with meaning.

You can see a version of this in young children. If you say, “Do you want to go outside?” in an aggressive tone, a child may interpret only the literal sentence. An adult reads tone, context, and intent. That gap—between structure and meaning—is where a large part of human intelligence lives.

For a long time, we’ve defined intelligence in terms of what can be measured: memory, logic, calculation—what people loosely refer to as “left-brain” thinking. But that’s only one slice of cognition. Human intelligence also includes social awareness, emotional processing, intuition, and context-building—things that are far harder to quantify.

This brings us to autism.

Autism is a spectrum, and it varies widely, but one commonly discussed pattern is an uneven cognitive profile: strong abilities in pattern recognition, systemizing, or detail processing, alongside challenges in social communication and contextual interpretation. That’s not a lack of intelligence—it’s a difference in how intelligence is distributed.

Years ago, I started thinking about whether some of these traits exist, in milder form, as part of what we might call a “typical male cognitive profile.” Not as a rule, and not universally, but as a statistical tendency.

There are a few reasons this question keeps coming up.

First, autism is diagnosed significantly more often in males than females. Historically, estimates were around 4:1, though more recent research suggests closer to 3:1 as diagnostic criteria improve. Even with that adjustment, the difference remains substantial.

Second, fields that rely heavily on systemizing—engineering, certain areas of physics, computer science—continue to be male-dominated. In many cases, the imbalance is not small, and it persists across countries and over time, even as access and encouragement for women have increased.

That doesn’t prove causation. Socialization, culture, and personal preference clearly play a role. But when a pattern is this consistent, it becomes harder to attribute entirely to external factors. It raises the possibility that underlying predispositions, on average rather than in any individual, are part of the explanation.

This is where the idea sometimes referred to as the “extreme male brain” theory comes in: the proposal that autism may represent an amplification of traits that, in milder form, are more common in males—particularly systemizing over empathizing. It’s a debated theory, but it attempts to account for both the gender imbalance in autism and the clustering of certain cognitive traits.

Now, bringing this back to artificial intelligence:

It’s tempting to describe systems like Watson as “autistic.” That’s not a literal diagnosis, but it does point to something real. AI, at least in its earlier forms, excelled in structured, rule-based, pattern-driven cognition—while lacking the deeply human abilities tied to context, embodiment, and social understanding.

In that sense, AI has been built outward from a narrow but powerful slice of intelligence—the same slice that tends to be overrepresented in analytical domains.

This may not be accidental.

If you were building intelligence from the ground up, you would likely start with what can be formalized: logic, language structure, data. Those are accessible. Context, emotion, and meaning are not. They emerge from lived experience, not just computation.

So AI development has followed the path of least resistance: starting with what is easiest to model, not what is most complete.

There’s also a caution in that.

We often talk about what artificial intelligence can do. But it’s just as important to ask what it can’t do.

My daughter once put it very simply: love is the one thing artificial intelligence can’t do.

It’s a child’s observation, but it cuts closer to the truth than most technical discussions. AI can simulate conversation, generate language, and process vast amounts of information—but it does not experience, attach meaning, or care.

And that may be the real boundary.

We often worry about machines becoming more intelligent than humans. But a more immediate concern is simpler: systems that are extremely capable in narrow domains, while lacking the broader context that gives intelligence its direction.

Human intelligence is not just the ability to process information. It’s the ability to interpret it, relate it, and place it within a meaningful structure.

If artificial intelligence is going to approach that, it won’t just be a matter of scaling up computation. It will require a better understanding of what intelligence actually is—and we’re still working that out.
