Discussion about this post

Oliver Sourbut

I appreciate this discussion a lot. Two things stand out to me as deserving more emphasis.

First, though, a quick framing: 'good epistemic outcomes' are something like a product of 'people trying to understand clearly' and 'people being able to do that effectively'. (Of course these are interrelated, because people's willingness is obviously affected by the practicalities - more on that in point 2.)

OK, the things:

1. It looks to me like most of the object-level task of collective epistemics is the _checking_: piecing together good 'secondary research' (broadly construed) (https://www.oliversourbut.net/p/a-full-epistemic-stack). That is, looking at provenance, tracking the evidence and reasoning dependencies for a claim, proactively gathering the best arguments for and against, identifying reasons to downweight certain testimony, etc.

- Why? Almost all our information about our environment beyond our direct sensory access is mediated through highly iterated message passing, reinterpretation, aggregation, and so on - especially in the heights of science and the depths (!) of political/influence goings-on.

- AI enables this (The Good) not so much (directly) by 'knowing' more or having 'more insights', but rather by hugely expanding the availability of clerical checking, tracing, and knowledge mapping work!

- You kind of talk about this in the collective epistemics discussion, but I think it warrants more emphasis.

2. Most of the *overall* task of collective epistemics may be in the *motivating*, i.e. having more people, more of the time, actually trying to understand things accurately, rather than retreating into one alternative cognitive mode or another.

- The usual label I use for alternative cognitive modes is 'tribal cognition', where most of what's said and recounted (and even believed), especially (but not only) about what's outside the immediate sensory environment, is in service of building and maintaining allegiances and coalitions.

- When is 'tribal cognition' incentivised? I don't fully know, but it has to do with:

-- When people are/feel threatened, they reach for affiliations which offer (perhaps passing or merely apparent) security

--- Abusers can play on this by a combination of bigging up threats and presenting as effective and sympathetic.

-- When the epistemic environment is difficult, accurate perception is more difficult and less rewarded.

--- Abusers can push this. In politics: flood the zone, firehose of falsehoods, FUD. In science: p-hacking, importance-hacking, conflating/obscuring methodologies.

-- Generally, adding noise and more convincing fake content undermines The Good above (the ability to check and trace) - not by making people believe the fake stuff, but by making them correctly recognise that it's hard to tell at all (hence the 'retreat').

-- Certain coalition norms can encourage epistemic insularity and discourage (genuine) scrutiny.

- I think you're touching on this in The Ugly, 'undermine sense-making'. To me it's possibly 'most of the problem'! Or at least, understanding the conditions under which people mobilise one cognitive intent or another in sensemaking, and how those conditions can be influenced, is a really big part of the picture here.

Nick Hounsome

I have recently come to realise that there is a downside cutting across both the positive and negative outcomes discussed: the likely divergence into AI haves and have-nots, and the efficiency losses involved.

Good AI use is expensive, and the best AI will always be much more expensive than the cheap or free AI that the masses can or will use.

In a battle for resources, those making greater use of better, more expensive AI will, on average, extract resources from those using no AI or cheaper AI. This in itself will drive an ever-widening inequality between the AI rich and the AI poor.

Worse, the prospective losses and gains will turn it into a winner-takes-all gamble: if I spend more than you on AI, then I am likely to be able to take all your stuff. But (1) I don't know how much you are spending, and (2) it is uncertain whether the current contest is actually one that current AI can win for me - so we might both be spending all that money for nothing, and that is a cost to society as a whole.

In aggregate, the winners will be those who gamble with AI, reinvest their winnings into more gambling, and get lucky. The losers will be those who don't play, those who gamble and lose, and society as a whole, because of the hideous inefficiency.

