A note

Dear tech support dude with an Indian accent,

You are an intelligent person in a rising empire. I am an intelligent person in a dying empire. I have no reason to wish you ill; on the contrary, you represent the best hope for the future of the species of which I am, after all, a member. I'd like to let you know what's happening here, in hope that it will help you and your community to avoid the mistakes I and my community have made.

When I share my experiences, when I tell my story in a place where you can listen, it is, at least in part, for your benefit.

Good luck.


Don't Forget To Check Out Our Weekly Specials!

So I just went to the pharmacy. Let me talk about cognitive load. I'm directing this at the marketing executive who has been paid to design my shopping experience. I drive to your store, park, and enter, wander up and down the aisles looking for an item I need, and then try to buy it. Let me confess up front: I've been diagnosed (by the same professional and commercial Western Medicine System which you are in a very real sense a part of) with ADD (back when that was one of the things it was called; I know it's changed; the DSM be praised). As I attempt to carry out my intended tasks in your store, I am bombarded with words and images, visually and auditorily. Every square inch of the surfaces that surround me is emblazoned with advertisement, from the packages of the products themselves, to the logos and faces on the sides of the endcaps, to the stands holding flyers, which I try not to knock over. Voices, NON-STOP VOICES entreat me over the PA to check out your weekly specials. At every spot where I can interact with a human being acting as a representative of your corporation, the face I see has a halo extending to the edge of my vision consisting of candy to give me diabetes or cancer interspersed with pictures of scantily clad celebrities. As I shop, little plastic boxes protrude from the shelves, and blink at me, offering pieces of paper which can grant me infinitesimal discounts on products I don't need. At the register I'm told I need a Special Swell Customer Card so that, in addition to getting more infinitesimal discounts and letting you track the progress of my hemorrhoid, I can devote precious space in my already overstuffed wallet to what is, in essence, a billboard for your corporation.

And in among the logos and the slogans and the entreaties and the coupons and the Special Swell Customer Card and the acres of glossy cleavage, there is a VERY REAL risk I will FORGET what it is I came into your store to buy.

If that happens, it will defeat the purpose of my having offered this hour of my life to doing business with you.

My (justified) fear of that eventuality makes my shopping experience, frankly, hellish.

So. You, marketing executive, have been paid what is probably a pretty respectable salary to lure me into your store and make me into a loyal customer. And the result of your earnest labors is that the brightly-colored sign above your door might as well bear the words, "Abandon all hope, ye who enter here."

Keep up the good work.


"Oh, and stop drinking tea."

This was said by my urologist this morning after she showed me the CT scan of the proto-boulder in my kidney.



Have you ever eaten a perfect peach? No, really, have you? Are you sure? Most people are lucky if once in a lifetime they eat a perfect peach.

I have a magic power. I have contrived, in the past week, to eat over one dozen perfect peaches. In among a life of cars and cubicles, gas prices and taxes, I have included this superhuman accomplishment.

And I did not do it emptily. I did not arrange it and then let it pass unheeded. To each bite I paid rapt attention, and discerned with intense concentration the papery flat of the outer fuzz, the sharp tartness of the inner surface of the skin, the miraculous flesh which achieves the feat of being at once uniform and smooth and yet delicately fibrous. I have examined the mechanics of the pressure of my front teeth against the nestled stone, and the flow of the juice from the compressed flesh surrounding it. And that is the magic: not that I have eaten perfect peaches, but that I have done so mindfully.

And I send you this secret message, this missive which no other could comprehend, to inform you that this magic power belongs to you, too. Yes, I mean you.

Use it wisely.



I can't make promises. Promises aren't how the universe works. But I can tell you that given what I know, I assess a high probability that there is more good stuff to come.


Which Cognitive Vice Is This?

I have a password that was generated for me by a password program. It's reasonably secure; it's a pseudorandom set of mixed case letters and numbers. It's one of many like that; I generate a new one for each context. It contains, in the middle, by chance, a sequence of four letters that are the same as the name of a minor character in a lesser-known story by my favorite author, but with the vowels taken out. And only if you don't count y as a vowel.
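Just how unlikely is that kind of coincidence? A back-of-envelope sketch, assuming a 12-character mixed-case alphanumeric password (the length is my guess; the post doesn't say):

```python
# Rough probability that one specific case-sensitive 4-character
# string appears somewhere in a random password. The 12-character
# length is an assumption for illustration.
alphabet_size = 26 + 26 + 10   # mixed-case letters plus digits
length = 12
starting_positions = length - 4 + 1

p_at_one_position = (1 / alphabet_size) ** 4
# Union bound over starting positions (ignores overlaps, which is
# fine for probabilities this small).
p_anywhere = starting_positions * p_at_one_position

print(f"about 1 in {round(1 / p_anywhere):,}")
```

On the order of one in a couple million, for any one particular four-letter string; of course, the number of strings a reader might recognize as meaningful is much larger, which is why these accidents keep happening.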

It's an accident. And not a very interesting one, at that. And yet every time I type it, I think of it as an homage to great literature, and hope the writing gods see it as such.


More Eschatology

The Bafflement is the period of history between the moment a civilization discovers the facts of quantum mechanics, and the moment that civilization understands the facts of quantum mechanics.

Few civilizations survive the End of the Bafflement.

Our civilization is in the Bafflement now.

The End is nigh.


The Doubularity Is Nigh

Huh. Randall Munroe and Cory Doctorow met in person over a week ago, and the Internet didn't explode, or invert, or become the Virtù. So we need to come up with some new candidates for what event will trigger the Geekpocalypse. My vote is for when Randall and Summer Glau start dating.

They'd make a cute couple, wouldn't they? And she could use her mad ballerina skills to defend him from the pixel-eating bands of LOLzombies which will roam the Afternet.


My Cognitive Vices: Penicillin Thinking

Sometimes I justify my mistakes by telling myself that they might result in serendipitous discoveries. Like when Fleming accidentally contaminated a culture plate with mold, and discovered penicillin. I call that penicillin thinking.


Fake physics

Attention comes from points with nonzero curl in the noetic flux.

What's the wavelength of an attention? (uh-TEN-tee-ON)



Do you suppose that Newton had a buddy who suggested to him, over coffee, "Hey, maybe you should invent a new kind of math for that. You know, some way to talk about sums of infinitesimal things." Would we hear about that guy?

I wouldn't mind being that guy, even if we didn't.



The phase-space of life is riddled with obstacles and currents. Our course through it is not simple. Bedeviling our navigation are the twin facts that we do not know our destination, and that our charts were drawn by madmen.



The idea of the salt trick (the original idea, anyway) is to find the mental tricks that get me to do the things I need to do. Algorithms for human living.

Ritual seems to be a useful tool. The tea ritual gets good results pretty often. I'm hoping it's more the ritual than the drugs in the tea (theobromine? [edit: nope, theophylline]). I don't want to be dependent on drugs.

I once applied the tea ritual to my job, and good things came of it. I'm reluctant to do that regularly, though.

My deep fear, and I think I rationally support this, too, is that if I became successful at my job, I might forget to worry about, and put effort into, things which are more important.


A bad artist who doesn't know any better will point to Picasso, or Monet, or van Gogh, to defend his bad art. You can't limit art with your narrow definitions, he'll say.

It's true; great art defies rules. This is because art is so richly complex a concept that any clean and rigorous definitions we try to come up with for it will be hopelessly naive. We're not smart enough to correctly draw boundaries for art. Every time somebody has set rules, a great artist has come along and broken them.

So we cannot set rules on art, yes?

Here's the problem: bad artists can break rules, too.

The reason we try at all to set rules is so that we can tell good stuff from crap. We're pretty sure there is a difference between good stuff and crap. So we try to decide what that difference is. When we've come up with an answer, though, it's always been wrong, or at least shortsighted.

But just because we can't codify our standards doesn't mean we should pretend we don't have any.

What to do?

Art isn't alone in this. Many kinds of endeavor fall into a problematic category: endeavors that almost certainly have value, and ought to be undertaken, but whose practice either cannot or should not be quantitatively evaluated for that value. Art falls into this category. So does education. So does philosophy. So do many things.

With any of these things, we end up, at any particular point in history, either trying to constrain it with ridiculous rules, or allowing it to produce huge quantities of utter crap. Sometimes we get creative and manage to do both at the same time.

I have a daydream that there exists a possible solution to this dilemma. I keep trying to figure out what it is.

I'm probably being shortsighted.

But it would be nice if we, as a civilization, knew how to talk frankly about the difference between breaking the rules because the rules are wrong, and breaking the rules because you suck.


Neologism Needed

Suppose you could come up with a field of mathematics that is to graph theory what the calculus is to arithmetic. I think that when we find a proper understanding of quantum mechanics, that new field of mathematics will be the language it's expressed in.

Look at a Feynman diagram; it's a graph.


What If

You theists say that there is an omnipotent, omnibenevolent God in charge of the Universe. The implication is that that's a good thing.

You atheists say there isn't.

What if you're both partly right?

What if there is no omnibenevolent God in charge of the universe...

...but there should be?

If that were the truth, neither one of your viewpoints would take us in the right direction.


Why You Can't Use Entanglement To Make An Ansible

I've been disappointed that I didn't grok why you couldn't use entanglement to make an ansible, so I asked around on the Viable Paradise Yahoo! Group, and got some insight. Thanks to Laura, Meredith, Calvin, and Leo.

Here's what I've got in my head:

In order to transmit information, one party must set (that is, determine) a property of the universe, and another must read (that is, measure) that same property. When each party is in possession of one particle of an entangled pair, neither party can set any entangled state of the particles (like, say, spin). One party (A) can set the property whether-the-waveform-has-collapsed-yet, by reading any of the entangled states, but the other party (B) can't read that same collapsed-yet property; B can only read the entangled state, which is something A didn't set. So even though the two particles might be said to be communicating with each other instantaneously, the two people who have them can't.
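The argument above can be checked with a little Monte Carlo: however A chooses the measurement setting, the statistics B sees on B's own particle never change, so no bit gets through. A sketch (the cos² correlation rule for a spin singlet is textbook; everything else, function names included, is mine):

```python
import math
import random

def singlet_pair(angle_a, angle_b, rng):
    """Sample one measurement on each particle of a spin singlet.
    Quantum rule: A's outcome is uniformly random; B's outcome is
    opposite to A's with probability cos^2((angle_a - angle_b) / 2)."""
    a = rng.choice((+1, -1))
    p_opposite = math.cos((angle_a - angle_b) / 2) ** 2
    b = -a if rng.random() < p_opposite else a
    return a, b

def b_marginal(angle_a, n=100_000, seed=0):
    """Fraction of +1 outcomes B sees, given A's choice of setting."""
    rng = random.Random(seed)
    hits = sum(b == +1
               for _, b in (singlet_pair(angle_a, 0.0, rng)
                            for _ in range(n)))
    return hits / n

# Whatever A dials in, B's statistics alone don't budge: ~0.5 either way.
print(b_marginal(angle_a=0.0), b_marginal(angle_a=math.pi / 3))
```

The correlations between the two sides do depend on the angles (that's what Bell tests measure), but B can only see those correlations after comparing notes with A over an ordinary, slower-than-light channel.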

I'm not sure if I've got it right, but it feels good to think that I grok it.



A to-do list is like a sun. When enough stuff collects in one place, it can spark to life, and produce energy, and fuel interesting processes. But above a certain size and beyond a certain age, its own weight turns against it. More stuff goes in than gets out. Eventually it collapses on itself, withdraws beyond its own event horizon, and becomes irrelevant, and nothing can escape from it save in the occasional random quantum event.



There's a thought I have. I've tried to put it into words. I've tried to write it down. But every time I try, I think, "Not good enough. I'll try again later."

But I may never get it right. So I'll cut and paste what I have now:

Every concept in philosophy has a scope to which it properly belongs. A realm of discourse within which it is appropriate. Free will belongs to the subjective scope, the discussion of what it's like to be a finite imperfect being. Determinism belongs to the objective scope, the discussion of abstract and complete reality. The confusion between free will and determinism comes of failing to keep the scope of each concept in mind. That's my position. I call it contraspectivism.

There I go, making up words again.



I have ridiculed those who force their women to wear the burqa. But I am as wrong as they are: A burqa baffles only the eyes.

For a bad man, a woman modest to his eyes can be naked in his heart, and for a good man, a woman naked to his eyes can be modest in his heart.

What matters is not whether your wife is surrounded by more or less cloth, but whether she is surrounded by good or bad men.

East or west, sun or rain, any man who points at a people not his own and condemns it for the attire of its women should look not to the eyes of his fellow men, but to their hearts.


The new meme: LOLyers!

In other news, today entertainment industry lawyers brought piracy suits in federal court against three gravestones, a puppy, and a traffic light.


Making Tea Now

I want to tell you that I've changed the way I make tea since my earlier post on the subject. It's been evolving fitfully, and only reached the state I'm about to describe a few days ago.

Darn Douglas Noel Adams (DNA on h2g2) and his milk molecules.

On my desk I now have a Bodum half-liter teapress teapot, a stirring implement, a quiet kitchen timer, and a 2-cup Pyrex measuring cup. In one desk drawer I have a tin of Tealuxe Irish Breakfast looseleaf tea and a plastic teaspoon. In another drawer, I have a box of 16-ounce Solo hot beverage paper cups and lids. In the coffee nook across the cubeway from where I sit are a spring water cooler/heater, a microwave, a mini-fridge containing half-and-half and a tiny ice tray, and a little squeezy plastic jar of honey.

Here's what I do: I open the desk drawer, get out the tin and spoon, and scoop two heaping teaspoons of leaves into the teapress. I take the teapress and measuring cup to the coffee nook. I fill the Pyrex with one and three-quarters cups of water from the hot spigot, put it in the microwave, and press Quick Min and Start. I wait, worrying what leaking microwaves are doing to my gametes. Sometimes I go fetch a cup and lid during this wait, but I only have a minute and I have to be there and ready right when the microwave beeps. If I'm five seconds late, the water in the Pyrex has stopped bubbling, and Douglas tells me, "the water has to be boiling (not boiled) when it hits the tea leaves." So when the microwave beeps, I take out the Pyrex promptly, pull out the teapress's steeper insert ('cause otherwise the water flows out through the pores and carries particulates with it), pour in the bubbling water, and quickly lower the steeper back in. I take the teapress (caution: hot!) and Pyrex (also hot!) back to my desk and set them on my tea napkin. The tea napkin is just a folded paper towel dedicated to the purpose of absorbing drips, not a specially-purchased product. I'm not sick. I start the timer, which is programmed for 4 minutes and 40 seconds. Sometimes I put the press part of the teapress in, sometimes not; it sinks and thus cuts the number of open pores through which tea liquor can flow (or through which tea particles can diffuse; I guess both things are happening), which bugs me, but then I've heard you're supposed to cover it while it steeps. While the timer counts down, I go back to the coffee nook. If I didn't bring a cup before, I do now. I squeeze about a teaspoon of honey onto the flat paper bottom of the cup. No, not upside-down, silly, right side up. I know that my phrasing might be confusing. I just say it that way because when I watch the thick honey blob spreading on the coated paper, I think of it as on a flat surface, not in a concave space. 
I take the cup back to my desk. When the timer goes off, I press the press part of the teapress. I pour the hot tea into the cup, onto the now spread-out honey. The tea pushes up little irregular circular ridges in the honey. I assemble my stirrer; I stick a plastic coffee-stirrer onto the end of a wooden stick that once held rock-candy, because the plastic stirrer isn't long enough. I stir the honey in until the bottom of the cup doesn't feel gooey. This involves reversing direction several times. I stick the end of the stirrer in my mouth and pull out the wood while sucking to get drops of tea out of the stirrer. Then I take the cup back to the coffee nook. No one has questioned why I make so many trips, but it's only a matter of time. I take out the ice tray and pop five or so little cubes out of it. If the cubes are reluctant to come, I use the point of the small blade of my Swiss Army knife to pry them. The top of the microwave provides a platform at just the right height that I don't have to unclip the lanyard from my belt loop. I fill the empty pockets with water from the cold spigot, and replace the tray in the freezy compartment of the fridge, carefully aligning it with the little ridges through which the coolant flows. I stir the ice cubes into the hot honeyed tea until they melt. This is enough to bring the temperature of the tea down near room temperature. Do I use the same stirrer? No. I'm too lazy or forgetful to have brought it along. So I grab one out of the cardboard box in the coffee nook, and chuck it when I'm done. Wasteful. I add half-and-half. Too much for my health, probably. Almost a quarter cup. Also, the English insist milk, not cream. Whatever. I sip. Wow, that's good. I mean, really, wow. I get what DNA was trying to convey. I return to my desk humming. I put the lid on. For some reason I don't drink immediately. It's as though I'm reluctant. Or I don't feel I deserve it yet. Or I'm not ready. 
If I was smart, I ate something substantial before I began this process. I mentally gird my loins, and open a blank page to type into, and I drink the tea. All of it. Usually in one go, sometimes I pause in the middle and make it two. Either way the majority of the tea flows into me in under a minute. I get high. I write. Or something; the recollection is unclear.
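Since the salt trick is supposed to be about algorithms for human living, here's the ritual above flattened into one, purely for fun. The quantities come from the description; obviously nothing about brewing tea needs Python:

```python
# The cubicle tea ritual as a checklist. Treat this as a mnemonic,
# not a spec; the steps and quantities are from the post above.
STEEP_SECONDS = 4 * 60 + 40   # the timer's 4:40

def tea_ritual():
    return [
        "Scoop two heaping teaspoons of leaves into the teapress.",
        "Microwave 1 3/4 cups of water; use it while still bubbling.",
        "Pull the steeper insert, pour, lower the steeper back in.",
        f"Steep for {STEEP_SECONDS} seconds.",
        "Squeeze about a teaspoon of honey into the cup.",
        "Press the press; pour onto the honey; stir until not gooey.",
        "Stir in ~5 small ice cubes to bring it near room temperature.",
        "Add half-and-half (almost a quarter cup).",
        "Drink it all, mostly in one go. Write.",
    ]

for n, step in enumerate(tea_ritual(), 1):
    print(f"{n}. {step}")
```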

If you read the earlier post on tea, you'll recognize that all of this allows me what is quite possibly British Kitchen-quality tea, in an American Cubicle setting.

Now I have to clean up tea leaves.

I cry sometimes when I think that, no matter how well I succeed or how badly I fail as a writer, I will never get to meet Douglas Adams.

Monkey knuckles

Ursula K. Le Guin's Steering the Craft includes exercises for the writer. Exercise one is playing with the sound of words. For the full description, see the book. Here goes:

Sitting sipping tea.
Fussing fingers typing types.
Precise and neat and tidy.
Taptaptap slap ding!
Zephyrs in my frizzy flutter.
Taptaptaptap ding! Taptap.

But rustle crash! That doesn't go.
The zephyr window yawns and not a breeze comes through, but monkeys! monkeys!
Here they come!
Slap not the ding, but handy feet
scatter crumbs and toast aside.
Monkey butts upon my page. Monkey knuckles thump the butter!
Tea a stream, and dripping down.
Tapping typer mouth an O.

And then no more. The window out.
And screeching swinging bellies go.
All grinning laughs and happy hands.
Another cup to turn.

- read this aloud, with an audience

(I know the taps and dings don't match how a typewriter actually works. Bite me.)



When I was a kid, there was a Gifted And Talented Children program (or something like that) and a bunch of smart kids were invited to come to the Museum of Science on a day when it was closed so that we could listen to lectures about being smart and eat crappy box lunches. I brought my best friend along.

We skipped out of the lectures and went nuts in the empty Museum of Science. It was ours, all ours. We went down back stairwells and found empty corridors and saw disused corners with forgotten display cases that no one had looked at in years. We got to peek into the box rooms and junk drawers of the house of knowledge. Delight! And we didn't get caught!

I wanted to see places I'd never seen before. I wanted to learn secrets.

My friend just wanted to go to the tops of the stairwells and spit down the middle.


Viable Paradise!

Hey, look, Pam's unofficial index of Viable Paradise info is up! And she linked to Salt Trick! Yay!

Viable Paradise is an SF writers' workshop that happens every fall on Martha's Vineyard. I attended VP XI in 2007.

More Neutrinos!

Scientists are already building an interstellar Schrödinger gun. And they don't even know it. Super-sensitive neutrino detectors are the death of reason, I tell you.

This is the Great Filter, isn't it? Civilizations get just smart enough to disrupt cognition throughout the galaxy.

(via Slashdot)

Hey, it looks like they're making beams of antineutrinos. What happens when a beam of antineutrinos intersects with a beam of neutrinos? Do they annihilate, and release energy? That'd be neat, since they don't interact with anything on the way there. It'd be a way to deliver energy to an arbitrary point in space.


The Attentiometer

The two-slit experiment. Schrödinger's cat. Heard of 'em? If not, go read up. I'll wait.

They're important concepts. They represent the greatest mystery about reality we currently face, and I just don't hear much about them. This bugs me. The two-slit experiment demonstrates that looking at the universe changes it. But it doesn't give us an adequate description of what "looking at it" means. Schrödinger's cat was an attempt by Schrödinger to make us see how bizarre were the implications of the "observation" aspect of quantum mechanics. Remember, he intended it as a reductio ad absurdum, that is, he meant to show how silly the superposition interpretation of quantum mechanics was, by showing that it led to a cat being both dead and alive at the same time, which is absurd. But nowadays lots of physicists are perfectly happy to use it as the metaphor by which they describe reality. "Yup," they say, "until you observe it, the cat's dead and alive at the same time."

Maybe if I'd taken one more term of physics before giving up, I'd know why what I'm about to suggest is just plain wrong. I hope somebody will explain it to me, and be gentle. Here's my thought:

Observing a quantum event can change its result. If you stick a photomultiplier on one of the slits, the pattern on the film changes. Can we turn that around, and use the pattern on the film as a way to tell whether an event was observed?

Could we send people randomly into a room where the output of the photomultiplier shows on a screen? And make sure that if there's no one there the information is irretrievably lost? Like, line the room in black velvet? Or something? Then later we compare the films with records of when there was someone in the room, and (here's my hypothesis) lo and behold we can tell when someone was watching.
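The difference between the two films is easy to compute, at least in the idealized case: with no which-path information the amplitudes add, and with a detector on one slit the probabilities add and the fringes wash out. A sketch with made-up geometry (point slits, arbitrary units; only the shapes matter):

```python
import cmath

# Idealized two-slit intensities at a row of screen positions.
# All parameters are arbitrary illustration values.
WAVELENGTH = 1.0
SLIT_SEPARATION = 5.0
SCREEN_DISTANCE = 100.0

def intensities(x):
    """Return (no-detector, with-detector) intensity at screen position x."""
    k = 2 * cmath.pi / WAVELENGTH
    # Path lengths from each slit to the screen point.
    r1 = ((x - SLIT_SEPARATION / 2) ** 2 + SCREEN_DISTANCE ** 2) ** 0.5
    r2 = ((x + SLIT_SEPARATION / 2) ** 2 + SCREEN_DISTANCE ** 2) ** 0.5
    a1 = cmath.exp(1j * k * r1)
    a2 = cmath.exp(1j * k * r2)
    unobserved = abs(a1 + a2) ** 2          # amplitudes interfere
    observed = abs(a1) ** 2 + abs(a2) ** 2  # which-path known: no fringes
    return unobserved, observed

xs = [i / 10 for i in range(-100, 101)]
fringes = [intensities(x)[0] for x in xs]
flat = [intensities(x)[1] for x in xs]
# The unobserved pattern swings between ~0 and ~4; the observed one sits at 2.
print(min(fringes), max(fringes), min(flat), max(flat))
```

So the two films really are distinguishable; the whole puzzle in the scheme above is whether "someone in the velvet-lined room saw the screen" versus "the information was lost" can ever be the thing that decides which film you get.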

I dunno. Maybe if you add it all up informationally, you're always observing or you're never observing. Maybe your act of looking at the results is causally downstream of the events, and so always counts as an observation. We know already that the observation's collapse of the quantum state is extratemporal. Or maybe something like Penrose's "one graviton effect" rule holds true, and it's purely a matter of how much stuff gets bumped by the event.

But, dammit, we can see the difference in the films. Can't we? Isn't there an epistemic asymmetry there? A handle to grab on to?

Maybe there's a catch-22 guaranteeing that you can never gain utility from decoherence. They say that you can't use entanglement to make an ansible; maybe it's a similar kind of situation to that. I don't grok it; I didn't study hard enough in Differential Equations. But, if so, is there any way to rigorously prove that that's the case?

And could somebody please convince this layman, so I can stop dreaming?

Please, can we settle this question?


The Schrödinger Gun

Roger Penrose proposes in The Emperor's New Mind that consciousness depends not just on arrangements of synaptic connections, but on funky quantum effects. (He was on a panel discussing this at a symposium at Dartmouth while I was there studying cognitive science, and I got to ask him a question. Squee!) Collapse of the quantum waveform, decoherence, boundaries between the quantum and classical, that sort of thing. Apparently he explored that particular topic, specifically with regard to a hypothesis he has about microtubules, in his next book, (hang on, Googling/Wikiing the name...) Shadows of the Mind. I haven't read it.

Suppose he's right. Suppose consciousness relies on something like quantum computing. Well, then, decoherence would interfere with it, wouldn't it? If you could figure out a way to "observe" the essential process or feature, you could disrupt someone's consciousness.

It's too bad we have no actual clue what "observe" means. Either theoretically or practically. How do you open the box on the cat? Can we mathematically define what constitutes opening the box? Penrose touches on this; he proposes that maybe it's something like, an event is observed when it's causally connected to a one-graviton outcome. He makes it clear that he's just waving his hands, though.

What if we could causally connect consciousness to a one-graviton outcome? Something like a photomultiplier for the soul. A cascade resulting from detection of some aspect of cognition.

Like say we discover that the quantummy activity in, oh heck, let's just go with the flow and say microtubules, sometimes emits neutrinos or something. Or when consciousness is happening, it has a different probability of emitting neutrinos. I dunno, all you need is something observable. Yeah, I know what you're going to say, but just for the sake of argument let's suppose we figure out a way to detect neutrinos without a coal mine full of ultrapure water. I watch PBS sometimes, too, you know.

So there you go. Point some appropriate kind of detector at somebody's head, and disrupt their consciousness. Just like the detector in the two-slit experiment, but useful.

Wouldn't that make a rockin' weapon?


Meme Grenade

I'm shy, in a peculiar kind of way, about communicating with people I don't know. There's a threshold, and only certain situations will get me over that threshold. If I have something I know to say, for instance, and I can convince myself it's relevant to the conversation, I can usually open my mouth. That's rare, though, and tough, and I'm afraid in my efforts to improve my social skills I may have somewhat degraded my criteria for relevance.

Another behavior which I often exhibit is what I call the meme grenade. I do this both in person and online. I'll toss an utterance of a few words into the group, carefully constructed to catch in people's heads and stimulate thought and conversation. Often, I'll then withdraw a bit, since the grenade was all I had to offer. More often than not, it's a dud, but it goes off frequently enough that I get reinforced for doing it. There's a little bit of glee I derive when, for instance, a thread or subthread discussion I initiate snowballs into a weeklong conversation.

There's a little subvocal exclamation in my head when I pull the pin. Translated to words, it might be, "Fire in the hole!"

I'm a visual person. There's a part of me that attributes synchronicity or something to the faint resemblance between an old-fashioned pineapple grenade and the capsid of a virus.


The Powers That Be

So, we just got to see the OLPC talk at PyCon. It was given by the fellow who handed out the first production OLPC.

And I've got my usual sci-fi feel about the OLPC thing, like it's going to wake the Overmind and such. And I don't know how silly that thought is. Even the most extreme position I've got in my head thinks that there's less than an even chance that this thing will actually happen; there's a great risk that it'll either fizzle on its own, or be stomped out by the powers that be.

I mean, how subversive can you get? How futurist can you get? Mesh networks. Pedagogy, economics, memetics.

One sad note, in that extreme position, is that if the big stuff happens, it will also include some bad shit. Seriously bad shit. Mass round-ups of XOs. Quashed uprisings by children. I don't want to think too much about it, but if you're starting to get really ugly images, you're heading in the right direction.

But one hopeful note is that the heroes of these events won't be unsung. The revolution will be televised.

And not so much broadcast as sown. It's decentralized. Peer-to-peer. Mesh. The fundamental architecture of the OLPC project is exactly the kind of stuff that's least suppressible by tyrannical governments. In fact, it's superbly engineered countertyrannical technology. Someone's been doing some big-picture systems thinking.

I spent some time last night playing a trial of a game called GalCon. It's a resource-acquisition galactic conquest game. Implemented in Python, natch. I sucked at it, as I do at games, but after a little practice I began to hold my own against the Practice level of the bots. It reinforced a lesson I'd learned in Settlers of Catan: Grab resources fast. The importance of a given second in the game follows something like a hyperbola; no second is nearly as important as the one before it.

So taking that lesson and looking at OLPC, I'm happy that this fellow was talking about handing out a quarter million laptops in this first go. Sure, more faster would be better. But this might be enough.

Let's hope the powers that be don't yet grok graph theory.


An Outsider Visits PyCon

I'm at Guido's keynote.

Totally surrounded by Pythonistas.

As usual, I'm the odd man out.

The staff tees have an xkcd panel about python on the back.

Aaaaaaand the wifi's not working.

So here's a great example of how I think. Just now, an irritating buzzing noise intruded on Guido's keynote. He was going over some of the cool new features coming in Py3k. Somebody closed a door or something, and it went away. It wasn't particularly loud. Not loud enough that Guido had to stop talking. But for one or two sentences, radically fewer of the people present were paying full attention to what Guido was saying. I know I couldn't focus on the memes he was trying to transmit. I'm pretty sure I did not grab any important information out of that part of the meme stream. How many other people didn't?

And what effect will that have on the future? This is a pretty important conference. Imagine a graph of pythonistas, showing who has lots of influence, who acts as a maven, etc. I'll bet, even if 1000 (that's how many are here) is a tiny percentage of all the people using Python, that more than half of the... I don't know the graph-theory term; mass or something... more than half of the total mass of the pythonista influence graph is here in this room. So, suppose uptake of a particular new feature had a big impact on the future of Python development (and, let's face it, on the future of computing as a whole, and thus probably on the future of humanity, and heck, let's be bold, the universe; if you don't believe me, I ask you, how different would your laptop be if its ancestors had used something other than punched cards?) then what memes got into the heads of these people here, especially during that particular slide, is a relatively big knot in the grand tapestry.
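The "mass" being groped for here does have a standard name: eigenvector centrality, which scores a node by how connected it is to other well-connected nodes. A toy sketch with a made-up graph, using plain power iteration (the graph and the "in the room" split are invented for illustration):

```python
# Eigenvector centrality by power iteration on the adjacency
# structure; scores are normalized to sum to 1, so a subset's
# total score is its fraction of the graph's influence "mass".
def eigenvector_centrality(adj, iterations=200):
    """adj: dict mapping node -> set of neighbor nodes (undirected)."""
    score = {node: 1.0 for node in adj}
    for _ in range(iterations):
        new = {n: sum(score[m] for m in adj[n]) for n in adj}
        norm = sum(new.values())
        score = {n: v / norm for n, v in new.items()}
    return score

# Hypothetical community: a, b, c are the well-connected mavens
# "in the room"; d..g are everyone else.
graph = {
    "a": {"b", "c", "d", "e"},
    "b": {"a", "c", "f"},
    "c": {"a", "b", "g"},
    "d": {"a"}, "e": {"a"}, "f": {"b"}, "g": {"c"},
}
score = eigenvector_centrality(graph)
in_room = sum(score[n] for n in ("a", "b", "c"))
print(f"fraction of influence in the room: {in_room:.2f}")
```

In this toy graph, three nodes out of seven hold roughly two-thirds of the mass, which is the shape of the claim about the room at PyCon.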

If I were some benevolent or malevolent entity (or even a neutral entity) trying to command history for good or evil (or for the heck of it) the repertoire of tools I'd use would probably include making a buzzing noise, not too loud, during the New Features slide of Guido van Rossum's keynote at PyCon 2008. From my subjective vantage, I have no clue whether that event had big import or almost none. But if I hop into the simulator part of my brain and try out an objective vantage for a moment, I imagine that if some real-darn-smart-but-not-omniscient being were nudging history, that fact would manifest itself to my subjective vantage as that buzzing noise.

Ah, crap, is that just paranoia?

I suppose, given the just-because-you're-paranoid rule, that that question is irrelevant. The real question should be: Does this observation give us anything? What could I do with it? What could anyone do with it?

How do I know what's really important?

Maybe the answer is not something that's available to my subjective vantage. Ever.



When I wrote about epistemic ethics, I talked about an attitude. The attitude which underlies the scientific method. The tendency to seek cognitive techniques which bring us closer to truth. I wanted to talk more about that attitude. I wanted to complain that we needed it to show up in more contexts than just the structure and activities of the scientific community.

But it's tough to talk about that attitude. Awkward. Hard to explain. Hard to refer to. I needed a word for it.

I couldn't find the right one. I decided it didn't exist.

So I made one up.

Aletheia was the Greek word for truth. It comes from lethe (λήθη), which means obscurity, concealment, or forgetfulness. The prefix a- (ἀ-) means not. Aletheia means not concealed.

Tropos (τρόπος) means turning. I remember in high school biology being delighted to learn the word heliotropism. It's just a neat word. It refers to the habit of some plants to turn to face the sun.

Alethetropy, then, means a tendency to turn toward the truth, or the act of facing the truth.

It turns out that I'm not the first to follow this etymological path; a variant, "alethetropic," is part of the title of an out-of-print book. So it's not really original. But it means something I want to say, so I'm going to start using it.

Now that I have it handy, it's a lot easier to express the following:

The scientific method is an algorithm for institutional alethetropy. What we need now are algorithms for individual alethetropy and cultural alethetropy.


I once had a conversation with an acquaintance about a television show she had seen. Her description included the phrase, "and then they did science." I asked her to explain, and she described what the people on the show had done, and I replied, "Well, that's not science." I tried to explain what I meant. I tried to explain things like falsifiable hypotheses, and attempting to eliminate bias, and the like. She didn't understand.

Science isn't men in white lab coats with bubbling beakers. Science isn't pointing shiny instruments at things. Science isn't big expensive machines making measurements of incomprehensible quantities.

Those things can happen as part of the process, but they're not what science is.

Science is the courage to seek the truth. To separate truth from untruth. To exchange what you want to be for what is.


Epistemic Ethics

It occurs to me that before I go on using these terms much more, I should write down definitions.

So, first: Epistemic Ethics. Epistemic ethics has to do with assigning moral value to how much one cares about the quality and provenance of one's knowledge.

The idea is that being skeptical (or credulous) can make you a good (or bad) person.

Let me put it in context: Nowadays you can go into a grocery store, and buy broccoli that the merchant assures you was grown in a garden without pesticides. In a country with fair laws. By farmers who weren't being oppressed. That's called provenance. How did it get to you, where did it come from?

Implicitly, you're a bad person if you don't care where your broccoli came from. If you buy your broccoli at the other store, the one across the street, you get a little twinge of guilt and fear that it might have been grown in toxic waste by slaves (which isn't far from the truth, maybe, but that's not my point).

And yet there's a section in the first store, the one with the happy ethical broccoli, that sells herbal supplements for relieving various physiological complaints. Most of them say what they're for. But if they do, they also have to say "These statements have not been evaluated by the FDA." But that's not a very strong warning, is it? Face it, we've all come to think of the FDA as a lumbering inept government bureaucracy (mostly because it's true). So if they haven't gotten around to finding out whether this particular flower or root will help me go to sleep at night, who cares?

But what the warning really should say, in most cases, is, "These statements have not been evaluated by anybody."

The FDA, flawed as it is, is an attempt to put into practice a particular attitude. It's an attitude that has evolved over the past few millennia, and has proven itself to help people get at the truth. It shows up in philosophy, it shows up in the US Constitution, and it shows up in the scientific method. It's hard to articulate, and many people have done a better job of it than I could do here, so I will only try to sum up: it's an attitude of intellectual humility, and mutual honesty. It's epistemic ethics.

When the FDA requires a pharmaceutical company to conduct rigorous double-blind clinical trials, it's attempting to enforce epistemic ethics. You can argue, with good cause, about how it's implemented and how the system has evolved, but you must admit that it's done a good job at reducing how often people can get away with making stuff up. It's introduced disincentives for people to proffer to other people junk knowledge.

Western medicine has flawed standards of proof, and questionable motives, yes. But it sure beats no standards of proof, and obvious motives.

And yet people have turned away from the attitude that brought about the FDA. They're forgetting the motivation because they're angry at the implementation. They're forgetting the moral root of the situation. They're buying products that bear the words "These statements have not been evaluated" as a badge of honor.

So, when I see a flower in a bottle, one aisle over from the morally upright broccoli, it bugs me. It bugs me that I might be living in a society where you can be a bad person for ignoring where your broccoli came from, but you get a free pass to ignore where your knowledge came from.

And that's a problem of epistemic ethics.



There is no relief from Descartes's deceiving demon. If I dig at the roots of my epistemic foundations, that is the bedrock I always find.

Given the deceiving demon, what does an individual possessed of reason take as a maxim of action? Reason itself, alone, is inert, and must be moved from without.

Even alethetropy cannot have an absolute foundation. I want to use it as my pedestal, but when I push against it, it shifts.

Alethetropy, then, must be taken as a guideline, a rule of thumb. It can serve as a moral basis, but like any other, must it be accepted on faith?


Bad skeptic!

There's a problem with modern skepticism.

When somebody objects to charlatanry on the grounds that its principles are wrong, that's bad skepticism. I've seen people object to homeopathy, for instance, by pointing out that the Law Of Infinitesimals contradicts known scientific fact. Bad skeptic! No biscuit!

The scientific mindset doesn't require you to prove HOW something works, or WHY it works, but THAT it works. Double-blind placebo-controlled clinical trials (that old chestnut) do not prove that a researcher's reasoning is valid; they prove that some particular testable prediction was correct or not correct. With error bars.

Knowing WHY something is, is a very different thing from knowing THAT it is. The former is deep, and rich, and in some cases might in fact be impossible (maybe fully understanding quantum mechanics, say, takes, in some inescapable information-theory kind of way, two more square inches of cortex than you've got in your skull). The latter, though, we have a chance at, and sometimes may even get a firm grasp on.

But bad skepticism happens a lot. People who think themselves intelligent, un-gullible, alethetropic, or epistemically virtuous, often engage in bad skepticism. When they should know better.

It happens because of a deeper problem. I think it may be connected to the Myth Of Reason. We've allowed ourselves to get so cocky about being logical, that we've begun attributing to logic powers it doesn't have.

Which, if we do not correct it, will make us no better than charlatans ourselves.



This morning I saw
on Lexington Road
on the bank of a stream
by a well-manicured house
a wheelbarrow
rotting into the ground under a pile of brush.

We none of us can maintain our worlds
keep them in good repair
and proper order. All decays.

I am on Thoreau's railbed, I think.
And here is a graveyard
on and on.
Lives that came and went.

But produced, evidently, headstone-carvers
who survived them.

Someone made the wheelbarrow new, once.

So what I crave to know
is someone somewhere building a wheelbarrow today?
Will someone again build a wheelbarrow tomorrow?



What's on my mind? My mind. It slurs, sloshes. It's troubled, but not by trouble. Just turbulent. Waves. Eddies. I knock on the glass, and within I answer back with a smile, and bubbles. And swim away, drifting loose in the bowl. I mouth the words. He shakes his head in reply. This is real, he says. This is us. We're happening.

But if nothing comes of it? So much disappointment. So much guilt, embarrassment at hopes forgotten, expectations not just unfulfilled, but denied the honor of regret.

Slow, slow. And one says that's the way, and one says that's nothing, that's void. Lack.

And I won't know. Even in the final accounting. So the question remains, as ever, what to do, when the answer will never be known. Alethetropy assumes return, not question alone. So if there is no answer, if the sum is never taken, what then? Then, then. Then there is no alethetropy. Only as an exercise, only the subjunctive. But the subjunctive is. Its referent absent, still it is, and gives power. So is alethetropy. It is not something we can have. It is a direction, not a destination. All we can do is cast the rope ahead, for the next. The best we can give them, is that we looked about, opened our eyes, and tried to discern the light, before we let go.


Needle and thorn

Walk, before you die, among catbriar and pine. Drink their lofty dappled air.

A haze of vegetable steel. Twining vines, sharp points. To walk that path is to yield to its piercing embrace, enveloped, bound, penetrated.

I would, if I could, stand still. Let the questing soft climb enchain me. Become the heart of a swaying pillar of this bark-walled cathedral to the filtered sun. Be grasped caressed gently inexorably by sweet curling tendrils jade.

And my troubled turmoil rot, sink to loam, and feed the singing still forest mind.



Why am I running? Why am I running away from what I have to do?

I rationalize. I suggest, well, maybe it's for the best, because what you're supposed to do isn't really what you're supposed to do. But that doesn't sit well. First of all, it's obvious rationalization, and rationalization is an epistemic vice. Second, if it were true, I'd expect it to look more like doing something else, and less like just not doing this. I don't exactly have a bias toward action. And I feel like I should. I suppose it's possible that the right thing to do right now is nothing, but my upbringing tells me otherwise. My upbringing says, no, you only get to use that excuse if you're doing something more important, not if you're doing nothing at all. And then the rationalizer says, well, I'm always doing something; suppose breathing or daydreaming is the more important thing? And we go around again.

I also have learned helplessness. And what an apt learner I am. Even when I protest, it's always phrased as "Why can't I?" Never do I suggest that maybe I just could. There is implicitly some savage dark power shackling me. It must be dark, indeed. However I peer into the shadows, I do not see it.

Certainly we can't have evolved lazy. I admit, it makes sense to conserve resources. It makes sense to minimize risk. But there has to be evolutionary pressure to do what we must.

What if that's it? What if we have evolved to do what we must? And here, in the Age of Fructose, there is so little must to go around, action escapes us. We're content to sit in a great crowd, picking nits from one another's fur, and we don't get up to forage for fruit, because fruit is all around us. At our fingertips. Clinging to us. Weighing down our bellies and buttocks. Instead of hiding in the trees where it belongs, beckoning to us to climb.


You can't make the truth not be. Once it is written, it cannot be unwritten. You can add to the truth and thus change it, but you cannot remove what is, once it is. If you make a mistake, you can never not have made it. But you can fix it, if it's the sort of mistake that can be fixed, and thus bring the sum of some facet of the truth back to where it was. And, probably, that facet is what you wished to change to begin with.

The universe is write-once memory.