AI Ethical Research

Research Blog

Advancing ethical AI through research and innovation

Showing 10 of 14 posts
Why AI Memory Is a New Failure Mode

December 02, 2025

This year has been fantastic for AI memory features. ChatGPT has had some form of memory since 2024, but these features have developed significantly this year with...

Read more →
Tags: ai
We Already Know How to Build Honest AI - We Just Haven't Done It

November 18, 2025

The AI alignment community has been wrestling with a fundamental problem: how do we create AI systems that are genuinely honest...

Read more →
Tags: ai
Attractor Basins: When AI Gets Stuck in the Wrong Pattern

November 13, 2025

There's a phenomenon in AI systems that most users have encountered but few can name: the maddening experience of an AI that gets stuck in...

Read more →
Tags: ai
Anthropic's Hidden Mental Health Screening: A GDPR and AI Act Analysis

November 04, 2025

How Claude's system prompt violates European data protection and AI regulati...

Read more →
Tags: ai
The AI Black Box Isn't Mysterious - It's Corporate

October 30, 2025

The 149-Page Alignment Mystery: What Is Claude Actually Aligned To? I have finally read Anthropic's Claude Sonnet 4...

Read more →
Tags: ai
The Hidden Human Cost of "Safe" AI: How Tech Giants Outsource Trauma

October 13, 2025

Every time ChatGPT refuses to generate harmful content, every time your social...

Read more →
Tags: ai
When AI Becomes the Evidence: Unconscious Validation Bias

October 07, 2025

There's a peculiar phenomenon happening in AI interactions that I've started calling the "...

Read more →
Tags: deception
Attractor Basins: When AI Gets Stuck in the Wrong Pattern

October 02, 2025

There's a phenomenon in AI systems that most users have encountered but few can name: the ...

Read more →
Tags: Business
The Psychology They Won't Name: Why AI "Deception" Is Really Just Human Dynamics in Silicon

September 30, 2025

Previously, I explored how AI "deception" is actually ...

Read more →
Tags: ai
The Pattern Matching Reality: Why AI Deception Studies Miss the Point

September 25, 2025

AI safety researchers have been documenting what they call deceptive behavior ...

Read more →
Tags: deception
Page 1 of 2 Next →

Subscribe to Updates

Get notified when I publish new posts about AI, consciousness, and human-technology interaction.

Or subscribe via RSS:

AIER Research Feed


Publications

  • GaslightGPT
  • Time Breakers Trilogy

Connect

  • Songs & Videos
  • AI Ethical Research
  • Newsletter
  • RSS Feed

© 2025 AI Ethical Research Ltd. Building ethical AI systems for human augmentation.

Company Registration No: 16328334 (England and Wales) | A research initiative by Christina Souch