Friday, May 30, 2014

Some tips for how to do science

I was having lunch with Ed Ballister to celebrate his freshly defended thesis (congrats!), and we were thinking: aren't there some best practices to being a good scientist? Like a set of tricks for how to do science? I think maybe many scientists resist this notion because we think it will restrict our creativity. I don't think this is the right way to think about it, though. I still believe in creative inspiration, but the point of our scientific training is to give us the tools to take a creative idea and shape it into a line of inquiry that yields interesting conclusions. People often "test" this ability in thesis committee meetings, but rarely do we talk about how to develop it. Anyway, sort of abstract, but here are some concrete tips:

  1. What would god do? Oftentimes, we pose a question, but our thinking is stifled by constantly getting hung up on experimental constraints. One way to open up thinking that Gautham and I like is to just imagine there are no experimental constraints. If you could measure anything and everything, what is the precise comparison you would make to reach the conclusion you want? Then work down to reality from there. For instance, we were interested in thinking about whether cell volume can control gene expression. If we were god, we could imagine just changing cell volume and seeing what happened. Turns out we could actually do something similar to that experiment in real life.
  2. Avoid whack-a-mole controls. Sometimes, you end up with a hypothesis for what could be happening, but there are tons of other potential effects you have to control for. This can lead to endless controls, none of which are particularly satisfying. Far better to have a single clever control that definitively answers the question. Here's an example: in the single cell biology field, one of the big questions early on was whether cell-to-cell variability in gene expression is due to stochastic effects or "other global effects". These "other global effects" could be just about anything, like position in the cell cycle, amount of ribosome, whatever. You can try to eliminate each one, but it's impossible because of the unknown unknowns. Far better is this beautiful experiment from Michael Elowitz, in which he measured expression from two distinguishable copies of the same gene: ALL unknown unknowns would affect both copies to the same extent, resulting in correlated variability, whereas random variability would be uncorrelated. There was definitely uncorrelated variability, hence expression variability is at least partially due to random effects. Beautiful experiment because of the beautiful control.
  3. Write down a draft of your paper as soon as possible. Writing tends to expose lazy arguments and tell you what controls you need that you haven't done yet.
  4. Think carefully about the experiments that are only interpretable in one direction. You know, the ones in which if you get one answer, then you can say something awesome, but if you don't get that answer, it's just hard to say anything at all. These experiments can be very useful, but quite often don't work out as intended, and if a project is made up of a lot of these sorts of experiments, well, tread carefully.
  5. Whenever you think you've figured out what's happening, try to come up with at least 1 or 2 alternative hypotheses. Perhaps one of the biggest issues in the era of scientific "stories" is that we all tend to avoid thinking about alternative interpretations. If you have a hard time coming up with plausible alternatives, then you might be on to something. Another related idea is to take all your data, forget about how you got there, and decide whether your main point still fits with most of the data. Sometimes you start down a path with one piece of data, formulate a vision, then try to fit all the rest of the data to that vision, even when the data taken as a whole would probably point in another direction. I've seen several papers like this.
  6. Pretend you were a mean and nasty reviewer of your own paper. What would be the key weaknesses? I've found that you usually already know the flaws, but just don't want to think about them or admit them to yourself.
  7. Think very carefully about the system you plan to use BEFORE starting experiments. Once experiments get going, they have a momentum of their own that can be hard to overcome, and then you might be stuck fighting the system all the time instead of just getting clean results. These can be big things like choosing the wrong cell line or model organism, or small things like targeting the wrong genes.
  8. (Related) That said, it's often hard to anticipate all the ways in which experimental vagaries can torpedo even the most well-thought-out plans. A scattershot approach can be useful here. Like, if your RNA-seq gives you 20 potential hits, don't just follow up initially on the top hit–try the top 3-5. For example, we had a situation where we tried to measure RNA from this one particular stem cell gene, and for whatever reason, the probe just did not work right. We spent months on it with nothing to show for it. Then we just picked out another 2-3 genes to look at and got great data right off the bat. The point is that some things may work and some may not, and it would be a shame not to follow up on something because the one test case you happened to pick didn't pan out.
  9. (Related) My friend Jeff liked to say that there's a strong correlation between projects in which things work well right away and projects that ever work. I think this is spot on, partially because of the principle of Conservation of Total Project Energy. Simply stated, there's only a certain amount of time you can spend on a project before you just get sick of it, and if you spend forever just getting the basic assay to work, for example, then you just won't have the energy to do cool stuff with that assay. If it works right away, though, then you still have the energy left to use the assay in interesting ways.
  10. Avoid salami-slicing experiments. There is a strong tendency to keep doing experiments that you know will work, perhaps with very incremental changes. These experiments typically don't tell you very much. This comes from the fear of the unknown. Fight against that tendency! What is the experiment that will give you the most information, that will decisively eliminate plausible alternatives? Chances are you might already know what that experiment is. Just do it!
Anyway, I hope these thoughts are helpful. Please do comment if you have any additional ideas. I would love to hear them and will happily gather them together into a follow-up post.

Wednesday, May 28, 2014

A dream about meeting an editor from Science

We just submitted a paper to Science about a week ago, and it's currently sitting in editorial review. Actually, I haven't been thinking about it all that much–as always, the chances of going out to review are minuscule, so I'm not getting my hopes up. But I did have a dream about it last night. I dreamt that I was somehow in the neighborhood of Science HQ, which was a nice little wooden house in a sunny and quiet part of town. I walked in and some editor guy was sitting behind a nice big desk. I said hello and then I asked somewhat sheepishly "So, have you decided on whether to review our manuscript?" To which he crinkled his face and said "Well..." Then he called out to someone in the bathroom: "Hey Bev [Beverly Purnell, editor likely to be the one looking at this manuscript], what do you think, should we send this paper out to review?" She called out "No". Then he wrote a big NO on the manuscript and handed it to me. "Sorry."

Now I was faced with a fork in the road. I could walk out with my tail between my legs (think with your head) or put up a fight (think with your heart). This being a dream, I opted for the latter. I got mildly belligerent, saying "What do you mean, no? This work is absolutely fundamental! It's so important! This is so much more important than all those 'protein X interacts with protein Y to do Z' papers you guys publish!" He looked at me and hemmed and hawed a bit. It became very awkward for both of us. After a few moments' pause, I said, "Next time, I should probably just wait for the e-mail, right?" To which he replied "Yeah."

Monday, May 26, 2014

The magical parts of R that you'll miss in pandas

- Gautham

I am a fan of R for data analysis, but for various reasons I am learning to use its best-known competitor: the pandas package written for Python. pandas has achieved considerable clout in the loosely-defined data science community, and is reportedly replacing R in everyday use.

The switch has been quite unpleasant for me so far. As far as I can tell, the main advantages of pandas are:

  1. Speed and efficiency. It is apparently very fast. Even faster than data.table (the R package you should read up on and use if your data.frame calculation is taking too long). Speed is one of the most important concerns for pandas's creator, Wes McKinney. 
  2. Integration into larger software. Python is a great language to build software in. pandas is close to the best one could imagine for doing R-like work from within Python. Python is fun and clean for writing modular code. Being able to write the data analysis pipeline and the web server that dispenses the results both in the same language is a great boon.
So it is clear why data scientists are using it. R can't compete with Python's very nice module structure, or its ecosystem of packages for general-purpose programming. 

Usually, if you go on the internet and ask about R vs pandas, the main advantage listed for R is the immense number of specialized packages for statistical computing and plotting. That is not at all what I've been missing since I switched.

It's only certain packages and certain behaviors. It's the magical parts of R that you will miss if you switch. The part where you write a short script with a few ddply, transform, subset, and ggplot commands and you're already looking at a beautiful, informative plot of your data.
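
To make that concrete, here's a minimal sketch of the kind of script I mean (the data frame and column names are made up purely for illustration):

    library(plyr)
    library(ggplot2)

    # fake per-cell RNA counts, just for illustration
    df <- data.frame(gene      = rep(c("geneA", "geneB"), each = 100),
                     condition = rep(c("control", "knockdown"), times = 100),
                     count     = rpois(200, lambda = 20))

    # note the bare, unquoted column names throughout
    df   <- transform(df, logCount = log10(count + 1))
    summ <- ddply(df, .(gene, condition), summarize,
                  meanCount = mean(count))   # mean counts per gene and condition
    high <- subset(df, count > 10)
    ggplot(high, aes(x = condition, y = logCount)) +
      geom_boxplot() + facet_wrap(~ gene)

Four or five lines and you're already looking at the plot.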

You might wonder why no other data analysis languages seem to feel like that.

It's because they can't.

All the uniquely sweet aspects of programming data analysis in R have to do with lazy evaluation, its approach to evaluating expressions in function arguments. Nearly nothing in Python or MATLAB is allowed to be lazy about evaluation, so no matter how hard you might work, you cannot truly reproduce those features. Instead, at best, you end up messing around with quotation marks everywhere.

"When you call a function in MATLAB, MATLAB first evaluates all the inputs, and then passes these (possibly) computed values as the inputs." - Loren Shure (same thing in Python)

"R has powerful tools for computing not only on values, but also on the actions that lead to those values. These tools are powerful and magical. If you’re coming from another programming language, they are one of its most surprising features" - Hadley Wickham


In fact, for the first few months I used R, I hesitated to use bare unquoted expressions because they felt unfamiliar and scary. But after a while it became an irreplaceable part of my workflow. If you don't need your data analysis to fit into a larger piece of software, or to be blazing fast, stick with R's expressiveness. They don't have that magic anywhere else.

(for an excellent description of R's magic by R's greatest magician, see: http://adv-r.had.co.nz/Computing-on-the-language.html#nse)


Saturday, May 24, 2014

All hypotheses in biology are true

Scarcely a moment goes by in my scientific life without hearing the words "hypothesis-driven science". Now that scientists have been driven mad by the poor funding situation and inane NIH guidelines for hypotheses in specific aims, it's all you ever hear everywhere you go. I think the notion that it's important to have a hypothesis has somehow replaced the more fundamental premise that one should think carefully about what they are doing and why, but these two views are not equivalent. It is possible to have a stupid hypothesis, and it's possible to have well thought out discovery-based experiments. Now that data is easy to generate, there is more room for the latter, but I believe this excessive focus on hypothesis is at least partly why most NIH grants have become so boring and conservative. Ultimately, yes, part of biomedical scientific training is to develop an idea and test it experimentally, but the strict hypothesis-based approach often promulgated in grants and thesis committees, etc., is a fairly narrow and inaccurate description of how real science moves forward, in my opinion.

Whatever, lots of people have written about hypothesis-driven vs. discovery-based science, and to sum up my thoughts, I think it would be far more useful to spend our time discussing good science vs. bad science. Instead, I wanted to point out another issue with hypothesis-driven science, which is that now that our measurement tools are better, every hypothesis is true. Yes, that's an exaggeration, but let me explain. I feel like when you study cells, everything you do affects everything else. If I knock down expression of gene A, expression from any randomly chosen gene B is pretty likely to change. Perhaps not by much, but now that we can measure things so well, you can quantify that change and it will be statistically significant. So let's say you have data saying protein X binds to protein Y, and because of this somehow you formulate the hypothesis that expression of gene A affects expression of gene B. If you do enough RT-PCR or RNA FISH or whatever, you will find an effect. So your hypothesis is true. In the old days, if you had such a hypothesis that led to a small effect size, you probably would not have detected it, and so you would have failed to reject the null hypothesis. But nowadays, with RNA-seq and RNA FISH and so forth, all these small effects are detectable. I think a strict hypothesis-driven approach is ill-equipped to deal with this issue, but actually a discovery-based approach can be powerful. You can say: well, protein X binds to protein Y and so I expect that expression of gene A affects something. Then use RNA-seq to find out what that something is! This approach has its problems and limitations as well, but it's not inherently wrong.
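
To see how easy it is to "confirm" such a hypothesis, here's a quick simulation (all numbers invented for illustration): give gene B a measly 2% shift in mean expression, measure enough cells, and significance is all but guaranteed:

    set.seed(1)
    n <- 10000                                   # cells per condition
    control   <- rnorm(n, mean = 100, sd = 20)   # gene B expression, untouched
    knockdown <- rnorm(n, mean = 102, sd = 20)   # a measly 2% effect
    t.test(control, knockdown)$p.value           # astronomically small p-value

Statistically true, biologically meaningless.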

I think this "every hypothesis is true" effect is partly why there are more and more irreproducible (or perhaps more accurately, inconsequential) papers out there, with lots of hypotheses that are technically true but whose biological meaning is unclear. I think that it would be more useful to think about how and why different data fit together. This is hard work, I believe harder than just formulating hypotheses. It requires careful reasoning, not just about controls for simple tests, but about alternative interpretations of the data as a whole, and also some amount of inspiration as to why studying this is meaningful in the first place. To me, that is what makes something good science, not just the fact that you managed to write down a hypothesis.

Friday, May 23, 2014

The quesarrito lives!

So Chipotle apparently has a secret menu item called the quesarrito. Paul and I had been talking to each other about it for a while (Paul first told me of the existence of this mysterious beast), but neither of us had actually ever gotten one. Until yesterday!!! I ordered the quesarrito, at which point the gracious tortilla tosser flashed a little smile and started to put one together. For those of you who don't know, the quesarrito is two tortillas sandwiched around some melted cheese (i.e., a quesadilla) which then serves as the wrapper for a burrito. Seemed gluttonous, but I was hungry. So I went for it. Somehow my burritos barely make it through the wrapping process, and the quesarrito fared even worse:

Nowadays, I always get double beans/no rice, which I think has something to do with it. Regardless, though, I don't think the quesadilla-as-wrapper did us any favors. The stiffness of the shell certainly made it difficult to wrap. She did her best, though, and even offered to start over, but it looked good enough, so off I went, extra napkins in tow.

And then I tried to eat this thing. It was BIG. But actually not as unwieldy as I had feared. The cheese had solidified, and so if anything, the shell actually held its shape better than a regular tortilla would have:


Slowly but surely, I gobbled this thing up. Verdict: yummy! But honestly, not that yummy, and certainly not proportionally more yummy than a regular burrito. I asked the burrito maker whether they get many repeat customers for the quesarrito, and the answer was no. I think that will be the case for me as well. But I'm certainly glad I did it!

Thursday, May 22, 2014

The case for paying off your mortgage

Just reading NYtimes.com, which had an article about whether buying or renting is better. They're saying that now that the housing market is perhaps over-inflating again (seriously, people, what the heck?), it's often better to rent than to buy. Overall, I would have to agree with that in most cases, not just by the raw numbers, but because people always underestimate the costs of maintaining a home (which are absurd–costs like $2K just to cut down a tree, etc.), and because people don't factor in the time and hassles associated with home maintenance as well. To the latter point, I think homes are best thought of as a hobby for the home improvement weekend warrior. Not my kind of thing.

But let's say you have kids, and, sigh, you own a house. A common bit of financial wisdom is that, especially when mortgage rates are low, you should take out a big mortgage and pay it off slowly, because you can invest that money and make more with it. Roughly, the argument is that if your mortgage costs you 4% interest and investing nets you 5% return, well, you pocket the 1% difference. There are some problems with this thinking. First off, one of the common arguments for it is that you save a lot on taxes because mortgage interest is deductible, so the effective cost is actually lower. True. But what every calculator I've seen fails to take into account is that you're only saving the difference between the itemized deduction and your standard deduction. If you have a family, which is probably often the case for people who own a house, then your standard deduction is pretty sizeable. Let's say that your mortgage interest costs you $15K per year, but your standard deduction is $12K. Then the real reduction in your taxable income from the mortgage is just $3K, NOT the $15K that most calculators use. If your tax rate is 25%, then this is $750–not chump change, but not really a game changer in the grand scheme of things, and much less than the $3750 you would calculate without thinking about the standard deduction. (It is also true that this becomes less of a factor the bigger your mortgage is, another reason why the mortgage interest deduction is such a regressive policy.) The other thing is that your investments are taxed, which many calculators also don't take into account.
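
If you want to check my arithmetic, here it is spelled out (a minimal sketch in R, with my made-up round numbers):

    mortgage_interest  <- 15000   # annual interest paid to the bank
    standard_deduction <- 12000   # what you'd deduct anyway
    tax_rate           <- 0.25

    # what most calculators compute
    naive_savings <- mortgage_interest * tax_rate                         # $3750

    # what you actually save: only the amount above the standard deduction counts
    real_savings  <- (mortgage_interest - standard_deduction) * tax_rate  # $750

Run the same numbers with a much bigger mortgage and the gap shrinks, which is exactly why the deduction mostly helps people with big mortgages.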

The other problem with considering investment return is that the actual return is, of course, unknown. On average, your investment will probably go up, but it's highly dependent on the details, even in a 30 year horizon. Look, if you gave some Wall Street-type a GUARANTEED rate of return of, say, 4-5%, you can bet your life they would invest in it (and, in fact, they do). What's the interest rate on your savings account? Or even your CD? Almost always less than 1%. So the banks are basically saying that they can only offer you a guaranteed return of well under 1%. What accounts for this spread? One factor is that little market inefficiency called, you know, bankers' salaries. The other is the fact that not everyone pays off their mortgage, so there is some risk that the bank takes on. But if you believe in your own ability to pay off your mortgage, then it's a guaranteed return: every dollar you put in will earn you roughly 4-5% per year without fail.

Also, for most of us, sitting around and calculating this stuff all day is just about the least interesting thing we could do with our time. As Gautham says, peace of mind is the only thing worth anything in this world. Most readers of this blog probably choose to push themselves out of their comfort zone by pursuing science, and have precious little mental energy to waste worrying about other stuff. So whatever, just sign up for autopay and forget about it. Better yet, just move into an apartment.

Wednesday, May 21, 2014

A proposal for measuring paper impact

It's really hard to know a priori which papers are important. Some are obvious, like Yamanaka's iPSC work, but most of the time, it's hard to tell. "Journal quality" provides some proxy, as do citations, but both are fairly imperfect–the latter is perhaps somewhat more useful than the former, but both are subject to the scientific fads of the time.

To avoid the fad issue, what if we all had to vote on our favorite papers that are 5-10 years old, 10-20 years old and 20+ years old? Perhaps that would allow us to determine what papers were really important for shaping science? Of course, getting people to vote would be hard, but as a crude proxy, what if we just measured the number of citations after 5 years or 10 years? I feel like when I cite papers that are more than 10 years old, it's usually because it really was important in shaping the field. I'm certain this data exists in citation databases like Google Scholar and crappy old Web of Science–I'm wondering if anyone's already done this sort of analysis...
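
For what it's worth, here's a sketch of how trivial the analysis itself would be, assuming you could export a table of citation events (the data frame and column names below are entirely invented):

    library(plyr)
    # one row per citation event; toy data just for illustration
    citations <- data.frame(paperID    = c("p1", "p1", "p1", "p2", "p2"),
                            paperYear  = c(1998, 1998, 1998, 2010, 2010),
                            citingYear = c(1999, 2010, 2013, 2011, 2012))

    # count citations arriving 10+ years after publication
    late   <- subset(citations, citingYear - paperYear >= 10)
    impact <- ddply(late, .(paperID), summarize, lateCites = length(citingYear))
    impact[order(-impact$lateCites), ]   # papers still cited 10+ years on

The hard part would be prying the data out of those databases, not the analysis.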

Sunday, May 18, 2014

Peace of mind and getting rid of peer review

I wrote a blog post a little while ago about how to review a paper, and it tapped into a vein of deep angst for many–this is clearly on many people's minds. Why? Why is it so hard to think about anything else when your paper is in review, even though there's just nothing you can do about it? Brings to mind a point Gautham likes to make, which I find myself agreeing with over and over again: peace of mind is the only thing worth anything in life. And putting yourself out there when you submit a paper is clearly the exact opposite of seeking peace of mind.

There are many among us who stridently advocate for getting rid of peer review entirely (I would certainly count myself one of them). I wonder if on some level this sentiment is driven not just by the flaws perhaps inherent to the current peer review process, but also by the desire to reduce the deep personal anxiety associated with the process. I think it's certainly true that peer review introduces unnecessary pain. I also wonder if there really would be much less anxiety in a post-peer review world. Let's face it: criticism is a bitter pill, period. Even generally constructive reviews are usually unpleasant the first time you read them. Sometimes, the reviewers will point out a flaw that, deep down, you already knew was a problem and just didn't want to admit to yourself. (Of course, waiting until you submit is a bad idea, but somehow it happens...) Would these problems go away if we had post-publication peer review through, for example, comments appended to your paper? I feel like we'll still live in denial about the flaws in our own work, and we'll still get that little hot rush when a comment points out a potential flaw in our argument or data. Submitting a paper for peer review is anxiety-producing, but so will be publishing without peer review, although probably in a less arbitrary way.

I think that's because doing science itself is inherently anxiety producing. We're putting something out there that (hopefully) is new and changes our view of the world a little bit. With this comes a natural fear that we screwed up, that our analyses and interpretations are wrong, and that we will be attacked for it. This will be true regardless of what form peer review takes. Well, that's not entirely true. If you don't have our current peer review system, then there's a chance that nobody will comment on (or read) your paper at all. Less anxiety, for sure, but is that really what one would want? Do we really do science for peace of mind?

(I should say that I am absolutely NOT advocating that a benefit of our current peer review system is that it ensures that at least someone reads your paper (probably 2 of the 3 reviewers on average :). It is probably true in many more cases than we would hope, but in and of itself, it's not a benefit.)

Saturday, May 17, 2014

Kids and robots

Just got a Roomba to automate the vacuuming. Totally awesome so far. Here's a funny moment: my son was sitting on the floor watching the Roomba, and petted it as it zoomed by him. Is this foreshadowing my children's future? Or will it be the robots that pet us on the head?

Friday, May 16, 2014

People are different

Been catching up with a lot of friends in Boston the last couple days, and it got me thinking about how different people are from each other. Makes me wonder whether mice would look at us and say that we all basically seem like wild-type humans...

Wednesday, May 14, 2014

Decision making is exhausting

Making decisions is just SO tiring. Actually, I think it's not so much making the decision itself, which is rather fast; it's the slow and agonizing process of rationalization afterward that really wears you out.

Sunday, May 11, 2014

Dishwashers are all the same

Last weekend, I was quite proud of myself for fixing our dishwasher. We thought it was on its last legs because it was not cleaning very well at all. So, of course, I turned to Google, and after typing in "Kenmore dishwasher not cleaning well", I found this terrific video explaining how to clean out your dishwasher after some moderately involved disassembly with a T15 Torx head screwdriver. It was totally gross in there! But after cleaning it out, it was back to normal and working as good as new (yes, I'm very proud of myself, even if it just meant following some simple instructions). How amazing is that, I thought, to just Google for a few minutes and come across the perfect live demonstration of how to clean exactly my kind of dishwasher? Turns out, though, that underneath the hood, all these dishwashers are the same! They all use the same metal innards and the same basic assemblies, just with some extra bells and whistles added or removed from various models. Kind of weird, no?

Saturday, May 10, 2014

Basketball and graduate admissions

Graduate school admissions season is over, and now for the big question: which of these kids is going to pan out? Everyone knows the kid who looked great on paper but was a complete dud at the bench, and conversely the kid who had super mediocre grades but was just a straight up monster in the lab. If only we knew ahead of time! What are we missing?

Meanwhile, I've been watching the NBA (basketball) playoffs, and I've been thinking that talent evaluators there are faced with exactly the same sorts of issues. There are many more infamous busts than stars, and it's startlingly hard to predict who will be what. It also occurs to me that deciding who to admit based on grades or "IQ" (I won't even dignify the GRE with a mention) is like admitting players into the NBA based just on their height. Yes, basketball players are definitely taller than average people, just like our graduate student pool is probably smarter than average. But just being tall doesn't guarantee that you will be a good basketball player at the highest level of the sport. Michael Jordan certainly wasn't the tallest player in the league. Similarly, most of the best scientists I know aren't necessarily the smartest people I've met. They're just smart enough; beyond that, it depends on other factors.

Another crazy fact I heard is that of all the people in the world over 7 feet tall, 1 in 6 plays in the NBA! Wow. This brings up a question: other than their height, are all these people really suited to being professional athletes? How can such a high percentage be strong and fast on top of being so tall? The p-value on that would seem infinitesimal. I think the answer is training–a professional athletic program can build up your other abilities to complement your intrinsic advantage of height. Continuing the analogy, in science, these kids are coming into graduate SCHOOL, and as the word school implies, that means they too are entering a period of training. When they come out, they will hopefully have developed their science muscles so that they are wise and knowledgeable and all those other things you learn in graduate school that go beyond just being smart. They will be trained to be professional scientists, just like professional athletes.

Of course, the question you're asking yourself is whether it's wise to train all of these people to be professional scientists when many of them won't continue in that role. Just like professional athletes, our time as professional scientists is limited. Some folks like me get really lucky and end up being coaches. Some of us end up doing things related to "the game", and some move on entirely. But I think that the training you get sticks with you and will influence you for the better for the rest of your life. Marshall (my first graduate student) told me that when he was working at this one company, he could easily tell which people had PhDs and which people didn't. And I think he meant that in a good way… :)

Back to graduate admissions, what are the right criteria to judge, then? Hehe, well, if I knew, I certainly wouldn’t say! Which is just another way of saying I don’t know. But I remember reading a NYTimes.com article once about some scientist who found a gene variant vaguely associated with athletic ability. The journalist clearly wanted something like “if you have this gene, you will be an athlete”, but the scientist, to his credit, said something like “Look, if you want to know which kid will run the fastest, put them all in a line and say ‘Go!’” I guess the same is true of graduate school. I think this is why good postdocs are such hot commodities–at that point, there is a track record to point to that says “This person runs fast”, which is why they typically have their choice of opportunities in the biggest and fanciest labs. Again, sort of like in basketball: small market teams must build through the draft because only the big market teams can attract the free agents.

Oh, and here’s one scary thing about this analogy: when the team starts losing, the first person they fire is the coach!

Tuesday, May 6, 2014

Simple rules for when not to send an e-mail

E-mail, despised and scorned as it may be, is still pretty awesome. I think it's safe to say that it has had a transformative (and largely positive) effect on how we do work. Seriously, try to imagine living without e-mail for a while. Yes, there are people who have gone just to Twitter or have found other ways to manage their life without e-mail. I don't know how they do it.

That said, there are of course situations in which e-mail just doesn't work, and it makes much more sense to pick up the phone: when previous e-mails have been confusing, or when you're discussing something sensitive and need to gauge the other person's feelings. How do you know? Here are some signs I watch out for:

  1. If it's taking you more than 10 minutes to write the e-mail, call.
  2. If you rewrite a particular sentence 5 times, call.
  3. If you find yourself wondering how someone will interpret your words, call.
  4. If there is any likelihood that your e-mail will make someone angry, call.
  5. If you are angry, call.
Basically, e-mail works great if the message is unambiguous and not inflammatory. If it's not, chances are it's a bad idea. You can spend eons crafting the perfect message and still piss someone off, or just call and maybe smooth everything over in 5 minutes. Which reminds me that I should get a cell phone.

Sunday, May 4, 2014

Change yourself with rules

Can a person change? Loaded question, one with many answers. I have vacillated on this many times myself, but I think I have an answer now, and that answer is yes. The question is how. I think the most effective thing for me has been to find a rule that makes concrete a particular principle you want to abide by, and then stick to it. Simple as that. I think it's much more effective than big sweeping generalities, because it's quantifiable: Did I break my rule? Did I not break my rule? If you have the will to stick to the rules, then you can change. And I feel like simple rules can have profound effects.

Here's a little example from my own experiences. At some point, I saw one of Uri Alon's tips on how to give a good talk, which is that every slide must have a title that is a complete sentence–subject, verb, object. I found this transformative in putting together my talks, because now every slide has a point, each leading to the next. And it's just such a simple rule. At first, I found myself fighting this rule, because it was a big change. But now I can't even imagine preparing a talk any other way. It's the perfect example of a rule: quantifiable, actionable, consequential.

Lots of other rules, many of which are well known. Another rule I try to follow is to avoid the use of "you" language in my interactions, which is bad for communication (e.g. compare: "You are doing that wrong" to "I usually do that differently"). For a while, I thought, "Hmm, I can't think of how to say this without saying the word you". But I followed the rule and just kept my mouth shut, and afterwards I realized that I was honestly just better off not saying anything. It worked! Over time, it just becomes second nature. And that is real change.

Saturday, May 3, 2014

How much is my time as a reviewer worth according to Oxford Press?


Just got a paper to review with the following heretofore unusual addendum:
A prompt and useful review that is received within 14 days will entitle you to either a free CD (Chandos catalogue) or a £5 discount on a OUP book. (Remuneration takes place annually.)
Man, that really swings the needle... ;)

Seriously, a £5 (roughly $10) DISCOUNT ON A BOOK?!? From just one particular publisher? Jeez guys, don't break the bank on my account. If you spend 3 hours reviewing the paper, they're valuing your time at about $3 per hour. The irony is that it probably costs them more to process the payment than the "payment" itself.

Then again, this is infinity fold more than I usually get. Too bad they didn't offer free pizza. Now that's a bribe I can get behind.

Friday, May 2, 2014

"Honors" are like "Delicacies"

I wrote before about how a lot of low grade publishing survives because of our egos. Here's another way to put it. Just like different cultures label their gross food that nobody wants to eat as "delicacies" (1, 2, 3, yucky!), scientists label their crummy jobs that nobody wants to do as "honors". Sometimes stuff really is an honor, but if someone asks you to do something that is a lot of work under the guise of it being an honor, well, think twice. It's probably more work than honor.