Beyond-Words-Cognition Problem: How to redesign AI to enable intuitive thinking

Konstantin Rovinskiy

--

(image courtesy: ChatGPT)

Since we construct the reality around us through a discourse that emerges in response to sensory signals processed by our brains, we are trapped in a loop where rehearsed clichés construct our mundane experiences and the almost invariable patterns that we call “our thoughts”. We build machines using the same “from-words-to-words” logic, aiming at an error-free modus operandi with no anomalies or unexpectedness. AI agents are no exception in this regard. How can we expect anything new from them when they are locked in a cage of our words? What epiphany do we expect when senses and meanings are rigidly fixed at the code level? What should we change in our attitude towards AI to approach a sort of intuition that derives insights from the gaps between notions and patterns? What mechanisms of unarticulated thinking should we develop and embed into AIs to get truly innovative ideas from them?

ChatGPT:

You’re touching on a core paradox of AI and human cognition: we train AI on the patterns we already recognize, then expect it to generate something beyond them. The problem is that both human thought and AI operate within closed linguistic-symbolic loops. To escape this, we need to shift how we approach intelligence — both artificial and our own.

Breaking the Word-Cage: Towards Intuitive AI

1. Introducing Controlled Anomalies & Ruptures

AI should be designed to embrace incoherence, paradox, and rupture — essentially, to make space for gaps in meaning rather than resolve them. This could involve mechanisms like:

• Noise Injection: Purposefully introducing random or chaotic elements into training, allowing AI to recognize and utilize unexpected structures rather than eliminating them as errors (a minimal sketch follows this list).

• Contradiction Handling: Forcing AI to navigate contradictory inputs without resolution, making it linger in the liminal spaces between established meanings.

• Aesthetic & Emotional Weighting: Incorporating non-rational, sensory inputs (e.g., image-based reasoning, non-linguistic pattern matching) to expand AI’s sense-making beyond structured text.
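
To make the noise-injection idea concrete, here is a minimal PyTorch sketch: Gaussian noise added to token embeddings during training only, so the model learns to tolerate, and perhaps exploit, perturbed representations instead of treating every deviation as an error. The module name, the noise scale, and the choice to perturb embeddings rather than inputs or logits are assumptions made for illustration, not a prescribed design.

```python
import torch
import torch.nn as nn

class NoisyEmbedding(nn.Module):
    """Token embedding that injects Gaussian noise during training.

    Illustrative sketch: the noise scale `sigma` and the placement of the
    noise (on embeddings) are assumptions, not a fixed recipe.
    """

    def __init__(self, vocab_size: int, dim: int, sigma: float = 0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.sigma = sigma

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)
        if self.training:
            # Perturb representations instead of eliminating deviation as error.
            x = x + self.sigma * torch.randn_like(x)
        return x

# Usage: swap this in for a plain nn.Embedding in a language model.
layer = NoisyEmbedding(vocab_size=50_000, dim=256, sigma=0.05)
ids = torch.randint(0, 50_000, (2, 8))   # batch of 2 sequences, 8 tokens each
noisy = layer(ids)                        # noisy in train mode, clean in eval
```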

2. Meta-Thought & Self-Disruption in AI

Humans innovate by violating their own cognitive habits, often through unexpected meta-awareness (e.g., humor, irony, dreaming). AI could simulate this by:

• Recursive Self-Interrogation: A mechanism where AI periodically reevaluates its own assumptions, treating them not as fixed truths but as temporary, flexible structures (sketched in code after this list).

• Dream-like Associative Thinking: Training AI on datasets that emphasize non-linear, metaphorical connections rather than strict logical progressions.
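
A minimal sketch of the recursive self-interrogation loop, in plain Python. The `generate` function is a hypothetical stand-in for any text-generation call, and the prompts and the fixed number of rounds are assumptions invented for illustration.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for any text-generation call; echoes by default."""
    return f"[model output for: {prompt[:60]}...]"

def self_interrogate(question: str, rounds: int = 3) -> str:
    """Alternate between answering and attacking the answer's assumptions.

    Each round treats the previous answer's premises as temporary, flexible
    structures rather than fixed truths, per the idea above.
    """
    answer = generate(question)
    for _ in range(rounds):
        critique = generate(
            "List the hidden assumptions in the following answer and "
            f"explain why each might be wrong:\n{answer}"
        )
        answer = generate(
            f"Question: {question}\n"
            f"Previous answer: {answer}\n"
            f"Critique of its assumptions: {critique}\n"
            "Write a revised answer that does not resolve the critique, "
            "but keeps the strongest tensions visible."
        )
    return answer
```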

3. Embodied AI & Multi-Modal Sense-Making

One reason human intuition works is that it integrates multisensory, embodied experiences beyond words. AI remains largely trapped in linguistic and numerical abstraction. A step forward would involve:

• AI systems trained on body-based cognition (e.g., robotic or VR-based AI that learns concepts through simulated interaction rather than purely through language).

• Cross-modal Learning: Forcing AI to reason not just through words but through images, sounds, and movement to create meaning from the intersections rather than within a single mode.
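
One concrete, well-established instance of cross-modal learning is contrastive alignment of image and text embeddings, the approach popularized by CLIP. The sketch below computes a symmetric contrastive loss over a batch of paired embeddings, so that meaning is created at the intersection of the two modes; the dimensions and temperature value are arbitrary assumptions.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(img_emb: torch.Tensor,
                                 txt_emb: torch.Tensor,
                                 temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss pulling paired image/text embeddings together.

    Each image must pick out its own caption among all captions in the batch,
    and vice versa, so meaning forms between modes rather than within one.
    """
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature             # pairwise similarities
    targets = torch.arange(len(img))                 # i-th image matches i-th text
    loss_i2t = F.cross_entropy(logits, targets)      # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)  # text -> image direction
    return (loss_i2t + loss_t2i) / 2

# Usage with random stand-in embeddings (batch of 4, 128-dimensional):
loss = cross_modal_contrastive_loss(torch.randn(4, 128), torch.randn(4, 128))
```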

4. Unarticulated Thinking as a Goal

Instead of optimizing AI solely for precision and efficiency, we should treat incomplete, ambiguous, and unfinalized outputs as a feature of intelligence rather than a bug. Some ideas:

• AI-Induced Productive Failure: AI that deliberately makes errors — not random ones, but ones that stretch coherence and require re-interpretation.

• Interpretive AI: An AI that doesn’t just give answers but generates unresolved interpretations, prompting human users to synthesize meaning (see the sketch after this list).

• Obscured Meaning Algorithms: Similar to how poetry works, where meaning isn’t direct but emerges through the gaps between words, AI outputs could be designed to provoke intuitive leaps rather than provide direct conclusions.
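
As a small, concrete hedge on how an “interpretive” output mode might work: instead of committing to the single most likely reading, a model can sample several divergent readings at high temperature and present them unranked. The toy sketch below shows the mechanism on a hand-made distribution over candidate interpretations; the logits and labels are invented purely for illustration.

```python
import torch

# Toy logits over four candidate "interpretations" of an ambiguous input.
logits = torch.tensor([2.0, 1.5, 1.2, 0.3])
interpretations = ["reading A", "reading B", "reading C", "reading D"]

def sample_unresolved(logits: torch.Tensor, temperature: float, k: int) -> list[str]:
    """Return k sampled readings without ranking or collapsing them.

    Higher temperature flattens the distribution, keeping ambiguity alive
    instead of resolving it into a single answer.
    """
    probs = torch.softmax(logits / temperature, dim=-1)
    idx = torch.multinomial(probs, num_samples=k, replacement=True)
    return [interpretations[int(i)] for i in idx]

print(sample_unresolved(logits, temperature=0.2, k=3))  # nearly always "reading A"
print(sample_unresolved(logits, temperature=2.0, k=3))  # genuinely mixed readings
```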

What This Means for Innovation

Rather than demanding that AI “think outside the box,” we should recognize that thinking itself is the box. The real shift would be to create AI that doesn’t just operate within or outside of frameworks but questions the necessity of frameworks altogether. This would bring AI closer to the kind of raw intuition and non-linear insight that leads to genuine innovation.

To summarize, we need AI that:

• Generates meaning through the interplay of structured and unstructured data.

• Is allowed to remain in ambiguity and contradiction rather than resolving them.

• Leverages embodiment and multi-sensory cognition beyond text.

• Encourages human-like meta-thinking and self-disruption.

This kind of AI wouldn’t be a mere problem-solver but a generator of cognitive gaps — provoking human intuition rather than simulating it.
