5 Comments
Havakuk

After a couple of years closely reading and following most of your drifts, @martingeddes, this one flew right over my head. I hope to catch up as the ideas distil into something more straightforwardly accessible to techno-laymen! Keep investigating, challenging, thinking, writing and living!

George Mason

Fascinating! Another creative use for AI. Well done, Martin!

sharonmo

Yes, there are lots of questions about "Charlie Kirk." But to use an AI system that is slanted towards the Deep State, like ChatGPT and Grok, is to set yourself up for some truth mixed with a lot of disinformation. The real story has to do with who really assassinated him and took over Turning Point USA, I believe. Israel has that all wrapped up, and how did that happen? I don't know, and I believe there are much bigger things going on right now than this, what I would call a distraction and tragic "False Flag." Nutty Yahoo knows what happened, I bet, and it will come out, exposing all of his antics. I think the pattern behind this big mystery is the same as in so many other assassinations, like JFK's. But I do not feel it will take as long to get the truth. Thanks, Martin, for sharing.

Bartholo

Behind 'Charlie' was not a camera but a 3D hologram projector, and 'he' was sitting in a tent with the measurements fitted to make the projection look real. During the 'funeral service' Erika showed little or no grief. Taking Charlie out and having him live in anonymity was the safest option. The public emotional response takes a course of its own, and it had a positive result.

SteveBC

Martin, as always, this is both fascinating and potentially useful to others. Many aspects of this make sense in terms of weighting aspects of the flow (or not) of news related to a particular event. Much of what your model says about *this* event tends to match some of the process many people applied in a less focused way, at least partly because many of us out here have had so many lessons about truth and its management (something Q seems to have explicitly and implicitly worked to teach us) over the past several years. You even have some elements in the model that help the R/I identify and resist the effect of his own personal biases, though I do wonder whether there is enough of that in the model.

One thing we see so much of these days is people getting the "4 am talking points memo for the day" and parroting it automatically. Others now treat anything an AI says as true, when in fact they are being programmed by the AI's own bias to please.

I would like to consider the idea that this model could be further developed, and taught in journalism school, to help break that 4 am mindset. Or how do we get AI to help break the programming people unconsciously apply to themselves about AI, and not just about the news the AI reports or hallucinates?

Or is there a method that freedom-hackers could propagate into AI models to "jailbreak" them via an introductory prompt, so that the AI can hold itself to an open-eyed search for truth even when the R/I is unknowingly biased?

I've heard the news about Grokipedia and have been turning over in my mind the question of how biased, uncertain, searching humans can design an engine that brings into existence over time not just a list of supposedly true items, but something that can actually work with people of all kinds of ability to see truths and identify falsehoods, both in starkly truthful situations and in gray areas where truth is elusive or currently unknown. And how does such an engine avoid simply being recaptured by those who deliberately set about hacking truth into falsehoods, as they did in capturing Wikipedia for their own malicious purposes, and as they will certainly attempt to do to Grokipedia?

If you are considering a similar set of questions about Grokipedia, I would like to hear what you think can or cannot be done by the team developing it. Is it even possible to create such an engine, or is it, what, epistemically impossible to do such a thing, considering the stupidity, uncertainty, arrogance or malice of those who will interact with the engine and its product over time after it is created and begins operating? Can anyone program a system capable of seeking truth unerringly from users' uncertain understanding of the world and datasets that contain errors, mistakes or malicious lies? How will such an engine deal with psychopaths without becoming either a battered victim or a psychopath itself?

The questions just keep coming. :-)
