To-Do Lists Suck

December 10, 2012

You start a to-do list full of grit, or maybe of excitement. You're about to get Organized. You write down all the things you have to do, confident that you'll come back and add more as you think of them. Or maybe you enter them all into your app or your phone. But then what happens? Do you do all those things? I don't. I do one or two of them, maybe, but mostly I never look at the list again. There are so many things on it! It's overwhelming. Worse, I have to scan the whole list to pick out a thing to do. Or maybe I don't, but my eyes scan it anyway. And the list is full of things I don't really need to do, because it's so easy to add extra things, which makes it harder to pick out the things that matter and makes me feel more backlogged than I am.

There are two fundamental problems with to-do lists. First, it's too easy to put an item on the list and too hard to take one off. Second, looking at the list is painful, because it puts mental images of all the things you need to do in your head, instead of just one thing.

I have an idea that I believe solves both problems. What about an app that only ever shows you the *most* urgent thing on your list? Then you never see your whole overwhelming list, and you don't need to do any thinking about which thing to do. So how does the list know which thing is the most urgent? By making it harder to add new things to the list. It keeps all your items ordered by urgency, and when you add a new item, it does a binary insert with you as the comparison operator. If you tell it you need to do X, it picks the middle item M on the list and asks you "Is X more urgent than M?", then recurses down whichever half of the list you indicate.
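Here's a minimal sketch of that insertion step in Python. It's just an illustration: the function names and the y/n prompt are mine, not anything from the actual app. With the list kept most-urgent-first, adding an item costs roughly log2(n) questions.

    def insert_by_urgency(items, new_item, ask):
        # items is ordered most-urgent-first; ask(a, b) asks the user
        # whether a is more urgent than b and returns True or False.
        lo, hi = 0, len(items)
        while lo < hi:
            mid = (lo + hi) // 2
            if ask(new_item, items[mid]):
                hi = mid        # more urgent than M: it belongs in the earlier half
            else:
                lo = mid + 1    # less urgent than M: it belongs in the later half
        items.insert(lo, new_item)

    def ask(a, b):
        answer = input('Is "%s" more urgent than "%s"? (y/n) ' % (a, b))
        return answer.strip().lower().startswith('y')

Showing "the one thing to do" is then just items[0].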

Does anyone else want this? I don't get around to working on it very often. I'm working on an App Engine version of it here if you want to help: https://github.com/jmathes/do-just-one-thing (I am not a designer)

 


Python debug logging that works for me

December 3, 2012

No, it’s not going to get open-sourced just now. It’s too specific to Sauce Labs. Instead this is an overview of how it works and what it does.

Usually when I want to debug some code, I do it by logging values during execution. When I do, I don't just want to see the value; I also want to see the value's name, or the expression, or whatever, so I can pick it out more easily when reading logs. In the past that meant adding an (optional) second argument where you pass the name:

    debug_log(my_value, "my_value")

This is where Python being a scripting language came in handy. In a dev environment, you still have the .py files with actual code hanging around, so you can use tracebacks to grab the code on the line the debug() call came from.
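A minimal sketch of the trick, with names and a regex of my own choosing rather than anything from the Sauce Labs code:

    import re
    import traceback

    def debug_log(value):
        # The caller's .py file is still on disk in a dev environment, so the
        # traceback machinery can hand us the text of the calling line.
        filename, lineno, func, text = traceback.extract_stack()[-2]
        match = re.search(r'debug_log\((.*)\)', text or '')
        label = match.group(1) if match else '<unknown>'
        print('%s:%d %s = %r' % (filename, lineno, label, value))

A call like debug_log(my_value) then logs the name right next to the value, with no second argument needed.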

That’s the main advantage. From there I added some more cool features. If the value being logged is a “primitive”, I just print its repr and type. If it’s an object, I print its type name, and all its public attributes’ reprs and types, but not recursively. I also show their docstrings if they’re available.
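Roughly, and again with my own names rather than the real code, the formatting side looks something like this:

    def describe(value):
        # Primitives: just the repr and the type.
        if isinstance(value, (bool, int, float, str, bytes, type(None))):
            return '%r (%s)' % (value, type(value).__name__)
        # Objects: type name, then each public attribute's repr, type, and the
        # first line of its docstring, one level deep only.
        lines = ['%s:' % type(value).__name__]
        for name in sorted(dir(value)):
            if name.startswith('_'):
                continue
            attr = getattr(value, name)
            doc = (getattr(attr, '__doc__', None) or '').strip().split('\n')[0]
            lines.append('    %s = %r (%s)  %s' % (name, attr, type(attr).__name__, doc))
        return '\n'.join(lines)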

This is a fairly simple chunk of code, but it has saved me a ton of debug time. If you develop in a language where tracebacks and run-time object introspection are available, I highly recommend building something similar.


A Go AI in 2 hours

November 26, 2012

Journalists used to think computers would never beat humans at chess.

You probably remember Kasparov vs Deep Blue. Back in those days, journalists were beginning to come around to the reality that computers were going to be better than humans at chess. They didn’t like it. Their go-to way to console the human race was the older, deeper board game go. Computers would never beat humans at go.

Will and Scott have a standing bet about it: every six months, for the rest of their lives, they meet and Will plays go against a computer program written by Scott. They've been doing it for almost ten years. Then, a couple of years ago, my friend Dusty joined the bet; now Will simultaneously plays go against Dusty's AI as well. I had been spectating since then, until last time, when I started feeling ambitious and signed up as the third AI.

Today was match day, and it was the first day my AI would participate. When I woke up this morning, I had done exactly zero work. Also, I had slept in, so we were late to the match. So that gave me the duration of the other two go games to write the simplest program I could that was still technically a go AI. I didn’t aim high.

I went with Python because I think it's the easiest language to rapidly prototype in. I used GoGui to avoid writing a UI. GoGui communicates with go AIs using GTP; it spawns a child process and talks to it over stdin and stdout. The initial setup of downloading GoGui and getting a project ready took about 10 minutes, mostly spent on false paths. Then the first hour or so of work was spent writing a barebones protocol handler. This is not easy to do under time pressure! The protocol spec is too dense to get much out of without reading a lot, so I learned the protocol groundhog-day style: by running my program over and over again and making it one step further through the handshake each time. Logging was key. Once that was done, I had a bot that could do the initial handshake and convince GoGui that it was a valid go AI, but it didn't make any moves. It took a couple of seconds to write a bot that would always make the same move, but after the first time it made that move, the move was illegal, so it didn't really count as an AI. But it was very close.
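For flavor, here is roughly what such a barebones handler boils down to. This is a from-memory sketch of the shape of the thing, not the actual CotiGo source, and it ignores GTP's optional command ids:

    import sys

    def respond(payload=''):
        # GTP success responses start with '=', then the result, then a blank line.
        sys.stdout.write('= %s\n\n' % payload)
        sys.stdout.flush()

    for line in sys.stdin:
        parts = line.strip().split()
        if not parts:
            continue
        command, args = parts[0], parts[1:]
        if command == 'protocol_version':
            respond('2')
        elif command == 'name':
            respond('CotiGo')
        elif command == 'version':
            respond('0.1')
        elif command == 'list_commands':
            respond('protocol_version\nname\nversion\nlist_commands\n'
                    'boardsize\nclear_board\nkomi\nplay\ngenmove\nquit')
        elif command == 'genmove':
            respond('D4')   # the always-the-same-move bot; the real one needs a legal move
        elif command == 'quit':
            respond()
            break
        else:
            respond()       # acknowledge boardsize, clear_board, komi, play, etc.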

The next couple of minutes were spent writing an internal representation of the board state so I could reliably make random *but still legal* moves.  If you're a go player, you're probably thinking I had to encode a bunch of go rules in order to avoid suicide moves and playing into atari. Turns out there's a rules variant in which suicide is still legal, albeit almost always terrible.
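A sketch of what that amounts to, as my own reconstruction rather than the repo's code; captures and ko are ignored here, which the real bot still has to deal with:

    import random

    SIZE = 19
    EMPTY = '.'
    COLS = 'ABCDEFGHJKLMNOPQRST'   # GTP coordinates skip the letter I

    board = {(x, y): EMPTY for x in range(SIZE) for y in range(SIZE)}

    def random_legal_move(color):
        # With suicide allowed, "legal" here reduces to "the point is empty".
        empties = [point for point, stone in board.items() if stone == EMPTY]
        if not empties:
            return 'pass'
        x, y = random.choice(empties)
        board[(x, y)] = color
        return '%s%d' % (COLS[x], y + 1)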

The good news was that it was finished in time to participate in the match. The bad news was that while go programs are now smart enough to beat humans, they're not yet good enough to beat humans by playing random moves. I lost spectacularly.

You can see my go bot, CotiGo, on GitHub. Maybe someday it will be better. A nice low bar to clear will be making it good enough to beat me. I'm very bad at go.


People can work harder if it’s not hard to work

March 23, 2012

There are a million studies that talk about how many hours a week will get maximum productivity out of a knowledge worker. They disagree with each other. Why do some people seem to perform best on 20 hours a week, while successful startup founders can put in 100?

I could put in 16 hour days easily and stress-free if my job description was “do whatever you want.” I already do that for free whenever I can. My job is not “do whatever you want” and it never has been. But my jobs have not always been “do exactly this one thing you hate”, and that middle ground is the uncontrolled variable confounding the million studies.

If you want some people to perform better, and the question you’re asking is “how many hours a week yields maximum performance,” you’re being an idiot. They can push themselves harder if their jobs are less stressful. Go watch the movie Office Space.

Things that make people's jobs easier: give suggestions instead of ultimatums, don't make up artificial deadlines, listen to them, give them creative freedom and a sense of ownership, minimize the coordination overhead they have to do, get out of their way, don't interrupt them, etc. This is really not news. The point is that they can work harder if you make working feel good. It's condescending to think it's coddling people to make their jobs easier. You're a manager. Your job has way more creative freedom, sense of ownership, and power. That's why you can work harder and longer.

If you think it's unrealistic to make people's jobs less painful, here's a helpful trick you can use to brainstorm better: stop being an asshole.


How to Win an Argument (don’t use this power for evil)

January 10, 2012

There are two kinds of disagreement. I’ll call one debate and the other discussion. In a debate, each participant thinks of the other as an opponent, and is trying to defeat him. A discussion is cooperative; the participants share resources to discover what the truth is between their seemingly contradictory ideas.

Have you ever noticed how in a debate, neither participant is ever convinced by the other's point of view? That's not a coincidence. People are physically incapable of learning information they don't like. It happens in the limbic system; you can read more here and here. In a debate, each person's personal investment in the outcome makes it painful, and thus difficult, for him or her to understand the opponent's ideas. This mode of discourse is evolutionarily advantageous, because even though you can't convince the opponent, you can convince the audience that the opponent is stupid. I suspect that this is why, in a debate, we have the urge to insult our opponents directly, giving birth to the ad hominem fallacy.

In a debate, all your opponent cares about is winning. If you want them to see that their side is false, you have to show them that changing their mind is not the same as losing. This is extremely difficult, and I have almost always failed to do so in practice. Saying so explicitly will not work. They will be offended by the implication that they were refusing to change their mind, and will accuse you of a transgression against an invented rule of debates.

There is only one effective way I have found of convincing the opponent that they can safely agree with you. Find a thing you yourself were wrong about, and happily, respectfully concede that point to them, in a way that causes you to lose face. To this end, it may be effective to arrogantly make points you know are wrong, so that you can concede them later.


Maybe this will get me posting again

September 16, 2009

3k hits a day’s what this blog struck
down to none now; it’s run by a slack schmuck
so I picked a slick schmuck cure;
strict limerick structure
if it works, I was right. Wish me luck!


iPhone dev 3: Steven and 3.0

May 13, 2009

Last time I blogged about iPhone development, it was like nails on a chalkboard to Steven, a friend and coworker of mine and the author of Routesy San Francisco. When I got into work that day, he called over to me: "Joe, next time you work on your iPhone development, maybe I should help." This drew some laughter from several other coworkers, who it turns out also read my blog.

I took him up on this generous request. He's writing a chapter for a book on iPhone development (sorry, I can't find a link to it). After he helped me get into a state where I can start developing – which involved a bunch of steps I no longer remember, sorry again – we walked through the first part of his chapter, and he gave me an electronic copy of it to take home and walk through further. I did some Objective-C programming as well. It has some funny features. It uses reference counting to persist objects, but you have to handle the references manually. Also, it uses this weird square-bracket notation for calling member functions on instances: [MyInstance doThing]. If there's a single parameter, it's like this: [MyInstance doThing:Parameter]. If there are two parameters, it's even weirder: [MyInstance doThing:FirstParameter secondParameterName:ActualSecondParameter]. The name of the second parameter comes right before a colon, the same as the name of the method itself. I'm sure this comes naturally after you've been coding Objective-C for a while, but it's still confusing.

I won’t be getting any practice on this today, since I’m busy downloading the iPhone 3.0 SDK, which is now mandatory.  Apple sent out an email to all developers saying that all apps will now be tested against 3.0.  This really only means they have to be forwards-compatible, but there’s no reason to bother with 2.2 when I don’t know what will and won’t be.  Instead, I’m using my 2 hours of personal project time to update all my blogs and research quaternions for my GDC notes series.

