The Schrödinger Gun

Roger Penrose proposes in The Emperor's New Mind that consciousness depends not just on arrangements of synaptic connections, but on funky quantum effects. (He was on a panel discussing this at a symposium at Dartmouth while I was there studying cognitive science, and I got to ask him a question. Squee!) Collapse of the quantum wavefunction, decoherence, boundaries between the quantum and the classical, that sort of thing. Apparently he explored that particular topic, specifically with regard to a hypothesis he has about microtubules, in his next book (hang on, Googling/Wikiing the name...), Shadows of the Mind. I haven't read it.

Suppose he's right. Suppose consciousness relies on something like quantum computing. Well, then, decoherence would interfere with it, wouldn't it? If you could figure out a way to "observe" the essential process or feature, you could disrupt someone's consciousness.

It's too bad we have no actual clue what "observe" means. Either theoretically or practically. How do you open the box on the cat? Can we mathematically define what constitutes opening the box? Penrose touches on this; he proposes that maybe it's something like this: an event is observed when it's causally connected to a one-graviton outcome. He makes it clear that he's just waving his hands, though.

What if we could causally connect consciousness to a one-graviton outcome? Something like a photomultiplier for the soul. A cascade resulting from detection of some aspect of cognition.

Like say we discover that the quantummy activity in, oh heck, let's just go with the flow and say microtubules, sometimes emits neutrinos or something. Or when consciousness is happening, it has a different probability of emitting neutrinos. I dunno, all you need is something observable. Yeah, I know what you're going to say, but just for the sake of argument let's suppose we figure out a way to detect neutrinos without a coal mine full of ultrapure water. I watch PBS sometimes, too, you know.

So there you go. Point some appropriate kind of detector at somebody's head, and disrupt their consciousness. Just like the detector in the two-slit experiment, but useful.

Wouldn't that make a rockin' weapon?


Meme Grenade

I'm shy, in a peculiar kind of way, about communicating with people I don't know. There's a threshold, and only certain situations will get me over that threshold. If I have something I know is worth saying, for instance, and I can convince myself it's relevant to the conversation, I can usually open my mouth. That's rare, though, and tough, and I'm afraid in my efforts to improve my social skills I may have somewhat degraded my criteria for relevance.

Another behavior which I often exhibit is what I call the meme grenade. I do this both in person and online. I'll toss an utterance of a few words into the group, carefully constructed to catch in people's heads and stimulate thought and conversation. Often, I'll then withdraw a bit, since the grenade was all I had to offer. More often than not, it's a dud, but it goes off frequently enough that I get reinforced for doing it. There's a little bit of glee I derive when, for instance, a thread or subthread discussion I initiate snowballs into a weeklong conversation.

There's a little subvocal exclamation in my head when I pull the pin. Translated to words, it might be, "Fire in the hole!"

I'm a visual person. There's a part of me that attributes synchronicity or something to the faint resemblance between an old-fashioned pineapple grenade and the capsid of a virus.


The Powers That Be

So, we just got to see the OLPC talk at PyCon. It was given by the fellow who handed out the first production OLPC.

And I've got my usual sci-fi feel about the OLPC thing, like it's going to wake the Overmind and such. And I don't know how silly that thought is. Even the most extreme position I've got in my head thinks that there's less than an even chance that this thing will actually happen; there's a great risk that it'll either fizzle on its own, or be stomped out by the powers that be.

I mean, how subversive can you get? How futurist can you get? Mesh networks. Pedagogy, economics, memetics.

One sad note, in that extreme position, is that if the big stuff happens, it will also include some bad shit. Seriously bad shit. Mass roundups of XOs. Quashed uprisings by children. I don't want to think too much about it, but if you're starting to get really ugly images, you're heading in the right direction.

But one hopeful note is that the heroes of these events won't be unsung. The revolution will be televised.

And not so much broadcast as sown. It's decentralized. Peer-to-peer. Mesh. The fundamental architecture of the OLPC project is exactly the kind of stuff that's least suppressible by tyrannical governments. In fact, it's superbly engineered counter-tyrannical technology. Someone's been doing some big-picture systems thinking.
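
If you want the "least suppressible" claim made concrete, here's a toy sketch in Python. The two topologies and the node names are entirely my own invention, not anything from the OLPC design: knock one node out of a hub-and-spoke network and out of a little mesh, and see who can still reach whom.

```python
# Toy sketch (mine, not OLPC's): compare how a hub-and-spoke network
# and a small mesh hold up when one node is knocked out.
from collections import deque

def reachable(adj, start, removed):
    """Return the set of nodes reachable from `start`, skipping `removed` (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in seen and nbr not in removed:
                seen.add(nbr)
                queue.append(nbr)
    return seen

# Hub-and-spoke: everyone talks through node 0.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
# Mesh: each node peers with a few neighbors; no single chokepoint.
mesh = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}

# Knock out node 0 in each and see who node 1 can still reach.
print(len(reachable(star, 1, removed={0})))  # 1 -- node 1 is stranded
print(len(reachable(mesh, 1, removed={0})))  # 4 -- the mesh routes around
```

Stomping the star is one decapitation; stomping the mesh means hunting down every node.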

I spent some time last night playing a trial of a game called GalCon. It's a resource-acquisition galactic conquest game. Implemented in Python, natch. I sucked at it, as I do at games, but after a little practice I began to hold my own against the Practice level of the bots. It reinforced a lesson I'd learned in Settlers of Catan: Grab resources fast. The importance of a given second in the game follows something like a hyperbola; no second is nearly as important as the one before it.
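
Here's the hyperbola as a back-of-the-envelope in Python. The 1/t weighting and the ten-minute game length are just my own stand-ins for illustration, not anything measured from GalCon:

```python
# Back-of-the-envelope (my numbers, not GalCon's): weight the
# importance of second t of the game as 1/t and see how front-loaded
# the total becomes.

def importance(t):
    """Hyperbolic importance of second t (t >= 1)."""
    return 1.0 / t

total = sum(importance(t) for t in range(1, 601))        # a 10-minute game
first_minute = sum(importance(t) for t in range(1, 61))  # its first tenth

# About two-thirds of the total weight lands in the first minute.
print(round(first_minute / total, 2))
```

Grab resources fast, indeed.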

So taking that lesson and looking at OLPC, I'm happy that this fellow was talking about handing out a quarter million laptops in this first go. Sure, more faster would be better. But this might be enough.

Let's hope the powers that be don't yet grok graph theory.


An Outsider Visits PyCon

I'm at Guido's keynote.

Totally surrounded by Pythonistas.

As usual, I'm the odd man out.

The staff tees have an xkcd panel about Python on the back.

Aaaaaaand the wifi's not working.

So here's a great example of how I think. Just now, an irritating buzzing noise intruded on Guido's keynote. He was going over some of the cool new features coming in Py3k. Somebody closed a door or something, and it went away. It wasn't particularly loud. Not loud enough that Guido had to stop talking. But for one or two sentences, radically fewer of the people present were paying full attention to what Guido was saying. I know I couldn't focus on the memes he was trying to transmit. I'm pretty sure I did not grab any important information out of that part of the meme stream. How many other people didn't?

And what effect will that have on the future? This is a pretty important conference. Imagine a graph of Pythonistas, showing who has lots of influence, who acts as a maven, etc. I'll bet, even if 1000 (that's how many are here) is a tiny percentage of all the people using Python, that more than half of the... I don't know the graph-theory term; mass or something... more than half of the total mass of the Pythonista influence graph is here in this room. So, suppose uptake of a particular new feature had a big impact on the future of Python development (and, let's face it, on the future of computing as a whole, and thus probably on the future of humanity, and heck, let's be bold, the universe; if you don't believe me, I ask you, how different would your laptop be if its ancestors had used something other than punched cards?). Then what memes got into the heads of these people here, especially during that particular slide, is a relatively big knot in the grand tapestry.
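
For what it's worth, that "mass" notion can be faked up in a few lines of Python. The graph below is entirely invented (two hub "mavens" plus a handful of peripheral folks); degree stands in for each node's mass:

```python
# Entirely invented influence graph: two hub "mavens" plus peripheral
# folks. Use degree as a stand-in for the "mass" of each node.
from collections import Counter

influence_edges = (
    [("maven1", x) for x in "abcdef"]
    + [("maven2", x) for x in "defgh"]
    + [("a", "b"), ("g", "h")]
)

degree = Counter()
for u, v in influence_edges:
    degree[u] += 1
    degree[v] += 1

total_mass = sum(degree.values())
hub_mass = degree["maven1"] + degree["maven2"]

# Two nodes out of ten carry a big slice of the graph's mass.
print(round(hub_mass / total_mass, 2))  # 0.42
```

Put both mavens in one room and you've got a big fraction of the graph's mass in one place, which is roughly the situation at a keynote.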

If I were some benevolent or malevolent entity (or even a neutral entity) trying to command history for good or evil (or for the heck of it), the repertoire of tools I'd use would probably include making a buzzing noise, not too loud, during the New Features slide of Guido van Rossum's keynote at PyCon 2008. From my subjective vantage, I have no clue whether that event had big import or almost none. But if I hop into the simulator part of my brain and try out an objective vantage for a moment, I imagine that if some real-darn-smart-but-not-omniscient being were nudging history, that fact would manifest itself to my subjective vantage as that buzzing noise.

Ah, crap, is that just paranoia?

I suppose, given the just-because-you're-paranoid rule, that that question is irrelevant. The real question should be: Does this observation give us anything? What could I do with it? What could anyone do with it?

How do I know what's really important?

Maybe the answer is not something that's available to my subjective vantage. Ever.



When I wrote about epistemic ethics, I talked about an attitude. The attitude which underlies the scientific method. The tendency to seek cognitive techniques which bring us closer to truth. I wanted to talk more about that attitude. I wanted to complain that we needed it to show up in more contexts than just the structure and activities of the scientific community.

But it's tough to talk about that attitude. Awkward. Hard to explain. Hard to refer to. I needed a word for it.

I couldn't find the right one. I decided it didn't exist.

So I made one up.

Aletheia (ἀλήθεια) was the Greek word for truth. It comes from lethe (λήθη), which means obscurity, concealment, or forgetfulness. The prefix a- (ἀ–) means not. Aletheia means not concealed.

Tropos (τρόπος) means turning. I remember in high school biology being delighted to learn the word heliotropism. It's just a neat word. It refers to the habit of some plants to turn to face the sun.

Alethetropy, then, means a tendency to turn toward the truth, or the act of facing the truth.

It turns out that I'm not the first to follow this etymological path; a variant, "alethetropic," is part of the title of an out-of-print book. So it's not really original. But it means something I want to say, so I'm going to start using it.

Now that I have it handy, it's a lot easier to express the following:

The scientific method is an algorithm for institutional alethetropy. What we need now are algorithms for individual alethetropy and cultural alethetropy.


I once had a conversation with an acquaintance about a television show she had seen. Her description included the phrase, "and then they did science." I asked her to explain, and she described what the people on the show had done, and I replied, "Well, that's not science." I tried to explain what I meant. I tried to explain things like falsifiable hypotheses, and attempting to eliminate bias, and the like. She didn't understand.

Science isn't men in white lab coats with bubbling beakers. Science isn't pointing shiny instruments at things. Science isn't big expensive machines making measurements of incomprehensible quantities.

Those things can happen as part of the process, but they're not what science is.

Science is the courage to seek the truth. To separate truth from untruth. To exchange what you want to be true for what is.


Epistemic Ethics

It occurs to me that before I go on using these terms much more, I should write down definitions.

So, first: Epistemic Ethics. Epistemic ethics has to do with assigning moral value to how much one cares about the quality and provenance of one's knowledge.

The idea is that being skeptical (or credulous) can make you a good (or bad) person.

Let me put it in context: Nowadays you can go into a grocery store and buy broccoli that the merchant assures you was grown in a garden without pesticides. In a country with fair laws. By farmers who weren't being oppressed. That's called provenance. How did it get to you? Where did it come from?

Implicitly, you're a bad person if you don't care where your broccoli came from. If you buy your broccoli at the other store, the one across the street, you get a little twinge of guilt and fear that it might have been grown in toxic waste by slaves (which isn't far from the truth, maybe, but that's not my point).

And yet there's a section in the first store, the one with the happy ethical broccoli, that sells herbal supplements for relieving various physiological complaints. Most of them say what they're for. But if they do, they also have to say "These statements have not been evaluated by the FDA." But that's not a very strong warning, is it? Face it, we've all come to think of the FDA as a lumbering inept government bureaucracy (mostly because it's true). So if they haven't gotten around to finding out whether this particular flower or root will help me go to sleep at night, who cares?

But what the warning really should say, in most cases, is, "These statements have not been evaluated by anybody."

The FDA, flawed as it is, is an attempt to put into practice a particular attitude. It's an attitude that has evolved over the past few millennia, and has proven to help people get at the truth. It shows up in philosophy, it shows up in the US constitution, and it shows up in the scientific method. It's hard to articulate, and many people have done a better job of it than I could do here, so I will only try to sum up: it's an attitude of intellectual humility and mutual honesty. It's epistemic ethics.

When the FDA requires a pharmaceutical company to conduct rigorous double-blind clinical trials, it's attempting to enforce epistemic ethics. You can argue, with good cause, about how it's implemented and how the system has evolved, but you must admit that it's done a good job at reducing how often people can get away with making stuff up. It's introduced disincentives for people to proffer to other people junk knowledge.

Western medicine has flawed standards of proof, and questionable motives, yes. But it sure beats no standards of proof, and obvious motives.

And yet people have turned away from the attitude that brought about the FDA. They're forgetting the motivation because they're angry at the implementation. They're forgetting the moral root of the situation. They're buying products that bear the words "These statements have not been evaluated" as a badge of honor.

So, when I see a flower in a bottle, one aisle over from the morally upright broccoli, it bugs me. It bugs me that I might be living in a society where you can be a bad person for ignoring where your broccoli came from, but you get a free pass to ignore where your knowledge came from.

And that's a problem of epistemic ethics.



There is no relief from Descartes's deceiving demon. If I dig at the roots of my epistemic foundations, that is the bedrock I always find.

Given the deceiving demon, what does an individual possessed of reason take as a maxim of action? Reason itself, alone, is inert, and must be moved from without.

Even alethetropy cannot have an absolute foundation. I want to use it as my pedestal, but when I push against it, it shifts.

Alethetropy, then, must be taken as a guideline, a rule of thumb. It can serve as a moral basis, but like any other, must it be accepted on faith?