Wednesday, May 27, 2020

Pattern of Omission: Could an AI pick up on it?

I have seen a similar pattern happening in the media across many science-related fields. In some cases it's actual researchers apparently committing these acts. The most obvious case right now is hydroxychloroquine. It is a safe drug, and it had been researched for coronaviruses in the past. Zinc and azithromycin are part of the original protocol because of the mechanism by which scientists think it stops viral replication.

The pattern of omission here is that there is usually no mention of zinc, sometimes no mention of azithromycin, and definitely no mention of the proposed mechanism. I haven't heard any mainstream source mention ionophores and how they would help zinc get into cells to stop viral replication.

This isn't new. Nutritional research is rife with this sort of thing. Allegedly low-carb diet studies where, for instance, the rat chow used isn't remotely low-carb enough to induce ketosis. The questionable nature of using rats in the first place. I suppose it's not quite as bad as poisoning rabbits with cholesterol- something rabbits would never get in their normal diet- and then declaring cholesterol bad for humans, but it's pretty bad.

But, in any case, it seems to me these things might be trackable in some way. It's not easy for a program to see the omission itself, but we probably have enough examples of the pattern that a program could guess the pattern is recurring. Feed it good examples of hydroxychloroquine being explained appropriately, then the bad examples. Do the same with other clear examples of omission. Then let it crawl the web and try it out on many things. I think it's possible because I think there's more than just the omission itself. There are probably word choices, grammar, maybe even simple things- like author names- that could help a program reliably categorize whether there is a pattern of omission.
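The crudest version of this idea doesn't even need machine learning: if a text is clearly about a topic but omits the terms that a complete treatment would include, flag it. Here's a minimal sketch in plain Python- all the term lists and example snippets are illustrative assumptions, not a real trained model or a real corpus:

```python
import re

# Hypothetical term lists for the hydroxychloroquine example from the post.
# A real system would learn these from labeled "good" vs "bad" coverage.
TOPIC_TERMS = {"hydroxychloroquine"}
EXPECTED_TERMS = {"zinc", "azithromycin", "ionophore", "mechanism"}

def omission_score(text: str) -> float:
    """Fraction of expected terms missing. 0.0 if the text is off-topic."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    if not TOPIC_TERMS & words:
        return 0.0  # not about the topic, so nothing is being omitted
    missing = EXPECTED_TERMS - words
    return len(missing) / len(EXPECTED_TERMS)

good = ("hydroxychloroquine with zinc and azithromycin; "
        "the ionophore mechanism stops viral replication")
bad = "hydroxychloroquine is unproven and risky"

print(omission_score(good))  # 0.0  - all expected terms present
print(omission_score(bad))   # 1.0  - every expected term omitted
```

A learned classifier, as described above, would replace the hand-picked term lists with features extracted from the good and bad examples- word choices, grammar, maybe author names- but the keyword-presence check is the baseline it would have to beat.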

There's the crassly simpler thing to do- figure out what Silicon Valley is suppressing. This is probably good on its own in a certain sense, but unfortunately it is not enough for what I want, which is to understand what is true. Or at least to have a more accurate map of the territory than the one they want to provide.

But it is clear there are patterns. I suspect Silicon Valley has noticed this, and I think they go with big-name companies as fact-checkers, or as authoritative voices, because it's easier to keep the narrative front and center that way. An A.I. would have no particular regard for the narrative, nor does the narrative stay the same in any sane way that would let it serve as a standard to measure against. Indeed, I remember finding that people in the field were also into making language gender-neutral- so I suspect they were attempting to get the result they want, but in the end it wouldn't make any sense. You just hobble your program that way. It needs to sample what is and identify patterns that actually exist.
