Saturday, December 27, 2014

Three observations about anonymity in peer review

I made a vow to myself to not blog about peer review ever again. Oh well. Anyway, I have been thinking about a few things related to anonymity in the review process that I don’t think I’ve heard discussed elsewhere:
  1. Everyone I talk to who has published in eLife has raved about it. Like, literally everyone–in fact, they have all said it was one of their best publication experiences, with a swift, fair, and responsive review process. I was wondering what it was in particular that made the review process so much less painful. Then somebody told me something that made a ton of sense (I forget who, but thanks, Dr. Insight, wherever you are!). The referees confer to reach a joint verdict on the paper. In theory, this is to build a scientific consensus and harmonize the feedback. In practice, Dr. Insight pointed out that the main benefit is that it’s a lot harder to give those crazy jackass reviews we all get because you will be discussing it with your fellow reviewers, who are presumably peers in some way or another. You don’t want to look like a complete tool or someone with an axe to grind in front of your peers. And so I think this process yields many of the benefits of non-anonymous peer review while still being anonymous (to the author). Well played, eLife!
  2. One reimagining of the publishing system that I definitely favor is one in which every paper gets published in a journal that only publishes based on technical veracity, like PLOS ONE. Then the function of the “selective journal” is just to publish a “Best of…” list of the papers they like the best. I think that a lot of people like this idea, one which decouples assessments of whether the paper is technically correct from assessments of “impact”. In theory, sounds good. One issue, though, is that it ignores the hierarchy on the reviewer side of the fence. Editors definitely do not just randomly select reviewers, nor select them just based on field-specific knowledge. And not every journal gets the same group of reviewers–you better believe that people who are too busy to review for Annals of the Romanian Plant Society B will somehow magically find time in their schedule to review for Science. Perhaps this new version of “Editor” (i.e., literature curator) might commission further post-publication reviews from a trusted critic before putting a paper on their list. Anyway, it’s something to work out.
  3. I recently started signing all my reviews (not sure if they ever made it to the authors, but I can at least say I tried). I think this makes sense for a number of reasons, most of which have been covered elsewhere. As I had noted here, though, there is “Another important factor that gets discussed less often, which is that in the current system, editors have more information than you as an author do. Sometimes you’ll get 2 out of 3 good reviews and it’s fine. Sometimes not. Whether the editor is willing to override the reviewer can often depend on relative stature more than the content of the review–after all, the editor is playing the game as well, and probably doesn’t want to override Prof. PowerPlayer who gave the negative review. This definitely happens. The editor can have an agenda behind who they send reviews to and who they listen to. So no matter how much blinding is possible (even double blind doesn’t really seem plausible), as long as we have editors choosing reviewers and deciding who to listen to, there will be information asymmetry. Far better, in my mind, to have reviewer identities open–puts a bit of the spotlight on editors, also.” Another interesting point: as you work your way down the ladder, if you get a signed negative review, you will know who to exclude next time around. Not sure of all the implications of that.
Anyway, that’s it–hopefully will never blog about peer review again until we are all downloading PDFs from BioRxiv directly to our Google self-driving cars.

Friday, December 26, 2014

Posting comments on papers

For many years, people have wondered why most online forums rack up hundreds of comments, while even the most exciting scientific results are met with the sound of crickets chirping. There are lots of theories as to why: fear of scientific reprisal, fear of saying something stupid, lack of anonymity.

Perhaps. But I wonder if part of it is just that it feels… incongruous to post comments on scientific papers. To date, I have posted exactly two comments on papers. My first owed its genesis (I think) to the fact that I had just read something about how nobody comments on papers, and so I was determined to post a comment on something. And it was a nice paper on something I found interesting and so I wanted to say something. I just now wrote my second comment. It was on this AWESOME paper (hat tip to Sri Kosuri) comparing efficiency of document preparation using Word vs. LaTeX (verdict: LaTeX loses, little surprise to me). Definitely something I found interesting, and so I somehow felt the urge to comment.

And then, as I started writing my comment, something just felt… wrong. Firstly, the process was annoying. I had to log in to my PLOS account, which I of course forgot all the details of. Then, as I was leaving my comment, I noticed a radio button at the bottom asking whether I had a competing interest. The whole process was starting to feel a whole lot more official than I had anticipated. Suddenly, the relatively breezy and light-hearted nature of my comment felt very out of place. It’s just very hard to escape the feeling that any commentary on a scientific paper must be couched in the stultifying language and framework of the typical peer review, which is just so different from the far more informal commentary that you get on, for instance, blog posts. And heaven forbid you actually post a joke or something like that.

I feel like part of the reason nobody comments is that publishing a paper seems like a Very Serious Business™, and so any writing or commentary associated with it seems like it should be just as serious. Well, I agree that publishing a paper is a very tedious business, but I think making scientific discourse a bit more lighthearted would be a good thing overall. And who knows, one side-effect could be that maybe someone might actually read the paper for a change!

Tuesday, December 23, 2014

Fortune cookies and peer review

Ever play that game where you take the fortune from a fortune cookie and then add “in bed” to the end of it for a funny reinterpretation? I’ve found it works pretty well if you just replace “in bed” with “in peer review”. Behold (from some recent fortune cookies I got):

Look for the dream that keeps coming back. It is your destiny in peer review.

Wiseness makes for oneself an island which no flood can overwhelm in peer review.

Ignorance never settles a question in peer review.

In the near future, you will discover how fortunate you are in peer review.

Every adversity carries with it the seed of an equal or greater benefit in peer review.

You will find luck when you go home in peer review.

Also reminds me of the weirdest fortune I ever got: “Alas! The onion you are eating is someone else’s water lily.” Not sure exactly what that means, in peer review or otherwise…

Saturday, December 20, 2014

Time-saving tip–make a FAQ for almost anything

One of the fundamental tenets of programming is DRY: Don’t Repeat Yourself. If you find yourself writing the same thing multiple times, you’re creating a problem: you’ve done the same work twice, and you now have to keep the copies consistent if you ever make a change.
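A toy sketch of the idea (the tax-rate example and all the names here are mine, purely for illustration):

```python
# Repetitive version: the 8% rate is written in two places, so a change
# must be made twice, and the copies can silently drift apart.
def price_with_tax_repeated(price):
    return price + price * 0.08

def refund_with_tax_repeated(price):
    return -(price + price * 0.08)

# DRY version: the rate lives in exactly one place; change it once and
# every function that uses it stays consistent.
TAX_RATE = 0.08

def price_with_tax(price):
    return price * (1 + TAX_RATE)

def refund_with_tax(price):
    return -price_with_tax(price)
```

The FAQ trick below is the same move applied to e-mail: write the answer once, in one canonical place, and point everyone at it.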

In thinking about what I have to do in my daily life, a lot of it also involves repetitive tasks. The most onerous of these are requests for information that require somewhat lengthy e-mails or what have you. Many times, I end up answering the same questions over and over. Which suggests a solution: refer to a publicly available FAQ.

I first did this for RNA FISH because I was always getting similar questions about protocols and equipment, etc. So I made this website, which I think has been useful both for researchers looking for answers and for me in terms of saving me time writing out these answers for every person I meet.

I also saw a nice FAQ recently (can’t find the link, darn!) where someone had put together a letter of recommendation FAQ. As in, if you want a letter of recommendation from this person, here’s a list of details to provide and a list of criteria to determine whether they would be able to write a good one for you.

Another senior professor I met recently said that she got sick of getting papers from her trainees that were filled with various errors. So she set up a list of criteria and told everyone that she wouldn’t look at anything that didn’t pass that bar. Strikingly, she said that the trainees actually loved it–it made a nice checklist for them and they knew exactly what was expected of them.

I think all of these are great, and I think I might make up such documents myself. I’m also thinking of instituting an internal FAQ for our data management in the lab. Any other ideas?

Sunday, December 14, 2014

Origin and impact of stories in life sciences research: is it all Cell’s fault?

I found this article by Solomon Snyder to be informative:

http://www.pnas.org/content/110/7/2428.full

Quick summary: Benjamin Lewin realized in the early 70s that the tools of molecular biology had matured to the point where one could answer a question “soup to nuts”. So his goal was to start a journal that would publish such “stories”, each aiming to provide a definitive resolution to a particular problem. That journal was Cell, and, well, the rest is history–Cell is the premier journal in the field of molecular and cellular biology, and is home to many seminal studies. Snyder then says that Nature and Science and the other journals quickly picked up on this same ideal, with the result that we now have a pervasive desire to “tell a story” in biomedical research papers.

I was talking with Olivia about this, and we agreed that this is pretty bad for science. Many issues, the most obvious of which is that it encourages selective omission of data and places undue emphasis on “packaging” of results. Here are some thoughts from before that I had on storytelling.

I also wonder if the era of the scientific story is drawing to a close in molecular biology. The 80s were dominated by the “gene jock”: phenotype, clone, biochemistry, story, Cell paper. I feel like we are now coming up on the scientific limitations of that approach. Molecular biology has in many ways matured in the sense that we understand many of the basic mechanisms underlying cellular function, like how DNA gets replicated and repaired, how cells move their chromosomes, and elements of transcription, but we still have a very limited understanding of how all this fits together for overall cellular function. Maybe these problems are too big for a single Cell paper to contain the “story”–in fact, maybe it’s too big to be just a single story. Maybe we’re in the era of the molecular biology book.

As an example, take cancer biology. It seems like big papers often run from characterizing a gene to curing mice to looking for evidence for the putative mechanism in patient samples. Yet, I think it is fair to say that we have not made much progress overall in using molecular biology to cure cancer in humans. What then is the point of those epic papers crammed full of an incredible range of experiments? Perhaps it would be better to have smaller, more exploratory papers that nibble away at some much larger problems in the field.

In physics, it seems like theorists play a role in defining the big questions that then many people go about trying to answer. I wonder if an approach like this might have some place in modern molecular biology. What if we had people define a few big problems and really think about them, and then we all tried to attack different parts of it experimentally based on that hard thinking? Maybe we’re not quite there yet, but I wouldn’t be surprised if this happened in the next 10-20 years.

(Note: this is most certainly not an endorsement for ENCODE-style “big science”. Those are essentially large-scale stamp collecting expeditions whose value is wholly different. I’m talking about developing a theory like quantum mechanics and then trying to prove it, which is a very different thing–and something largely missing from molecular biology today. Of course, whether such theories even exist in molecular biology is a valid question…)

Saturday, December 13, 2014

The Shockley model of academic performance

I just came across a very interesting post from Brian McGill about William Shockley’s model for why graduate student performance varies so much. Basically, the point is that being successful (in this case, publishing papers) requires clearing several distinct hurdles, and thus requires the following skills:
  1. ability to think of a good problem
  2. ability to work on it
  3. ability to recognize a worthwhile result
  4. ability to make a decision as to when to stop and write up the results
  5. ability to write adequately
  6. ability to profit constructively from criticism
  7. determination to submit the paper to a journal
  8. persistence in making changes (if necessary as a result of journal action).
Now, as Brian points out, if you were 50% better at all of these (not way beyond the norm, but just a little bit better), then your probability of succeeding in your assigned task (which is the product of the individual probabilities) is roughly 25 times better. This is huge! And it’s also to me a reason for great hope. The reason is that if, alternatively, being 25 times better required being 25 times better at any one particular thing, then it seems to me that it would require at least some degree of unusually strong innate ability in that one area. Like, if it was all about writing fast, then someone who was a supernaturally fast writer would just dominate and there’s nothing you could really do to improve yourself to that extent. But 50%? I feel like I could get 50% better at a lot of things! And so can you. Here are some thoughts I had about creativity, writing with speed, execution and rejection, and there are tons of other ways to get better at these things. Note that by this model, by far the most important quality in a person is the ability to reflect on their strengths and weaknesses and improve themselves in all of these categories.
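The arithmetic behind that “roughly 25 times” figure is just compounding across the eight hurdles:

```python
# Shockley's multiplicative model: overall success odds are the product
# of the odds of clearing each of the eight hurdles, so a modest edge
# at every step compounds into a huge overall advantage.
n_skills = 8
improvement_per_skill = 1.5  # "50% better" at each skill

overall_advantage = improvement_per_skill ** n_skills
print(f"1.5^{n_skills} = {overall_advantage:.1f}")  # 1.5^8 = 25.6
```

The flip side is the same compounding in reverse: being a bit worse at every step shrinks your overall odds just as dramatically, which is the toxic-lab-member point below.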

I think this multiplicative model becomes even more interesting when you talk about working together with people in a lab. One point is that establishing a lab culture in which everyone pushes each other in all regards is critical and will have huge payoffs. Practically, this means having everyone buy in to what we collectively think of as a worthwhile idea, how we approach execution, how to write, what our standards of rigor are, and sharing stories of rejection and subsequent success through perseverance. This also helps explain the disastrous consequences of having a toxic person in lab: even if the effects on each other person in the lab in any one of these qualities are small, in aggregate, they can have a huge effect.

The other point is delegation strategy. It’s clear that in this model, one must avoid bottlenecks at all costs. This means that if you are unable to do something for reasons of time or otherwise and the person you are working with is also unable to do that task, things are going to get ugly. The most obvious case is that most PIs have only a limited capacity (if any) to actually work on a project. So if a trainee is unable to work on the project, nothing will happen. Next most obvious case is inability to write. If the trainee is unable to write and you as a PI have no time or desire to write, papers will not get written, period. Deciding how much time to invest in developing a trainee’s skills to shore up particular weaknesses is a related but somewhat different matter, and one that I think depends on the context.

This model also maybe provides some basis for the importance of “grit” or resilience or motor or drive or whatever it is you want to call it. These underlie those items on the list that are the hardest to change through mentorship. If someone just doesn’t have an ability to work on a project, then there’s not a whole lot you can do about it. If someone does not have the determination to do all the little things required to finish a project or to stick to it in the face of rejection, it will be hard to make progress, and there’s not much that you can do to alleviate these deficiencies as a mentor. I think many PIs have made this realization, and I have often gotten the advice that the most important thing they look for in a person is enthusiasm and drive. I would add to this being open to reflection and self-improvement. Everything else is just gravy.

Sunday, November 23, 2014

The most annoying words in scientific discourse

Most scientific writing and discourse is really bad. Like, REALLY bad. How can we make it better? There are some obvious simple rules, like avoiding passive voice, avoiding acronyms, and avoiding jargon.

I wanted to add another few items to the list, this time in the form of words that typically signify weak writing (and sometimes weak thinking). Mostly, these are either ambiguous, overused, or pointless meta-content just used to mask a lack of real content. Here they are, along with my reasons for disliking them:

Novel. Ugh, I absolutely hate this word. It’s just so overused in scientific discourse, and it’s taken on this subtext relating to how interesting a piece of work is. Easily avoided. Like “Our analysis revealed novel transcript variants.” Just say “new transcript variants”.

Insight. One of the best examples of contentless meta-content. If any abstract says the word insight, nine times out of ten it’s to hide a complete lack of insight. For example: “Our RNA-seq analysis led to many novel insights.” Wait, so there are insights? If so, what are these insights? If those insights were so insightful, I’m pretty sure someone would actually spell them out. More than likely, we’re talking about “novel transcript variants” here.

Landscape. Example of a super imprecise word. What does this mean anyway? Do you mean an arrangement of shrubbery? Or do you mean genome-wide? In which case, say genome-wide. Usually, using the word landscape is an attempt to evoke some images like these:


Now exactly what do these images mean? Speaking of which…

Epigenetic. Used as a placeholder for “I have no idea what’s going on here, but it’s probably not genetic”. Or even just “I have no idea what’s going on here whatsoever”. Or chromatin modifications. Or all of this at once. Which is too bad, because it actually is a useful word with an interesting meaning.

Paradigm. Need I say more?

Robust. Use of the word robust is robust to perturbations in the actual intended meaning upon invoking robustness. :)

Impact. As in “impact factor”. The thing that bugs me about this word is that its broad current usage really derives from the Thomson/Reuters calculation of Impact Factor for journal “importance”. People now use it as a surrogate for importance, but it’s always sort of filtered through the lens of impact factor, as though impact factor is the measure of whether a piece of work is important. So twisted has our discourse become that I’ve even heard the word impactful thrown about, like “that work was impactful”. It’s a word, but a weird one. If something is influential, then say influential. If it’s important, then say important. If an asteroid hits the moon, that’s impact.

These words are everywhere in science, providing muddied and contentless messages wherever they are found. For instance, I’m sure you’ve seen some variant of this talk title before: “Novel insights into the epigenetic landscape: changing the paradigm of gene regulation.”

To which I would say: “Wow, that sounds impactful.”

[Updated to include Paradigm, forgot that one.]
[Updated 12/13: forgot Robust, how could I?]

Saturday, November 22, 2014

Verdict on a (mostly) Bacn-free week of e-mail: totally awesome!

It’s been one week since I tabulated my e-mail and decided to run a few experiments based on the results. Quick recap: I found that I got a lot of Bacn (solicited but often unimportant e-mail, like tables of contents and seminar announcements), and this was contributing to a sense of being overwhelmed by e-mail. So I resolved to do the following:
  1. Filter out primary conveyors of Bacn to a Bacn folder that I would skim through rapidly just a few times a day.
  2. Deal decisively with the e-mail when I read it–either reply or get off the pot, so to speak.
Quick summary is that this experiment has been a great success! I feel much more efficient, less overwhelmed, and less likely to miss important things. Highly recommended.

Here are a few more details. So I have two e-mail addresses. For the most part, one of them gets all my work e-mail, and the other one is mostly personal, but has a lot of Bacn and spam in it. Before, I had been combining both into my inbox. So that was easy: just check my work e-mail and separate out the personal one to check on an as-needed basis. Of course, I’m still getting a lot of Bacn on my work e-mail, so I then made filters to automatically file Bacn into a separate folder. I initially thought this was going to be super simple. Turns out it was a bit more work than I thought: there are MANY different Bacn providers at Penn. So it took a while to set up a filter for each of them. But it worked: almost all the Bacn went to a specific folder.
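The logic of those filters is simple sender-based routing, which a mail client expresses as filter rules but which you could sketch like this (the addresses here are made up for illustration):

```python
# Hypothetical sender-based routing: mail from known Bacn sources gets
# filed into a separate folder to be skimmed in batch; everything else
# stays in the inbox. Real mail clients do this with built-in filter rules.
BACN_SENDERS = {
    "seminar-announce@example.edu",    # made-up addresses, one per
    "toc-alerts@example-journal.org",  # Bacn provider you identify
}

def route(sender: str) -> str:
    """Return the folder an incoming message should be filed into."""
    if sender.lower() in BACN_SENDERS:
        return "Bacn"
    return "Inbox"
```

The per-provider tedium mentioned above is just growing that set one address at a time as new Bacn sources show up.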

The results were glorious! I found I spent much less time looking through all these unimportant e-mails during the day, and then I could batch process them much more efficiently during a period of downtime. There is little better than selecting a huge block of e-mail and deleting them all at once! A few times, I would get a real e-mail from a Bacner that I needed to respond to, but it turns out that they were never urgent nor terribly important, and I could deal with them during this downtime period (which is probably when I should be dealing with them anyway).

I didn’t anticipate how much this e-mail filtering would engender peace of mind. I guess I was expending more mental energy than I thought processing all these different e-mails in a single stream. The steady stream of notifications that we all know we should ignore but don’t thinned out considerably, and I felt like my focus was better. I didn’t quantify whatever actual productivity gains there may have been (although I suspect there were some), but I can definitely say that the perceived quality of e-mail life went up considerably. Definitely felt like I was in much more control over what I was doing. Basically, it made it much easier to process e-mail the way I always knew I should in theory but rarely actually did in practice.

I think this filtering also really helped with the other aspect of my experiment, which was to be decisive (actually something I have been working on in general). The idea here was to read each e-mail only once before doing something with it, which means either marking as read or replying. Or at least getting as close to this ideal as possible. Since all the e-mails in front of me now have a similar status, I found it a bit easier to do this, because I’m not changing “modes” from one e-mail to the next.

Decisiveness is hard, and it’s something I’ve struggled with for a long time, in the context of e-mail and otherwise. And being deliberate is not necessarily a bad thing. But I think most of us tend to undervalue our time, and being decisive means trading off between making the best possible decision slowly and making a good enough decision quickly. Or, as is more often the case, between making the best possible decision slowly and making that very same decision quickly–indeed, I feel like much of the time, the “decision making process” is really just a slow process of rationalizing a decision you’ve already essentially made. So I’m trying to just go with my instincts and then think, well, if I made a mistake, so be it. The key is to ask myself, “Am I going to get any new information that might change my decision? If not, then go for it.” That takes care of a lot of situations, e-mail or otherwise.

UPDATE: Forgot to mention that I got two e-mails this past week from close collaborators with the subject line "Not Bacn". :)

Sunday, November 16, 2014

A week in my e-mail life

[Follow up post here: Verdict on a (mostly) Bacn-free week of e-mail: totally awesome!]

[Note: This is a longish post, so here’s an “abstract” that gets across the main points: Academics get a lot of e-mail. I decided to catalog my e-mails for the week to see if I could identify any patterns. I found that a large amount of my e-mail was “Bacn”, meaning e-mails that I am in some way supposed to get, but are typically not very important, like seminar announcements, etc. A lot of the more research-oriented e-mail was related to logistics, like shipping, etc. As for what to do about it, I think the number one thing is to pre-filter a bunch of the Bacn, which typically just comes from a relatively limited number of easily identified people and only very very rarely requires any sort of immediate action. This will help make it easier to process it in batch mode, which is another area where I could really improve how I handle e-mail, rather than replying in a more "real time" fashion. And I will try to be more decisive in handling e-mail. An update on how all this worked next week.]

As is the case for most academics these days, I get a lot of e-mail. And as is the case for most academics, I love to complain about how much time it takes up. I was thinking about this recently when I came across the line “E-mail is everyone else’s to do list for you.” Which I thought was an interesting way of thinking about it. I mean, just because someone has my e-mail address doesn’t necessarily give them the right to command my attention, right? But then I thought a bit more, and I wondered if my attention really is being dragged unnecessarily in unwanted directions, or is it primarily spent on things that I want to pay attention to. Are there ways that I can make myself more efficient?

So I decided to catalog all the e-mail I got in the last week. First, a couple notes on methodology. I basically just looked through my e-mail for the past week and tried not to delete anything (which I normally don’t do, except for spam). Going through, I categorized the e-mail (more on that later), kept track of whether I replied or forwarded the e-mail, and how long it took me to reply. I also kept track of whether the e-mail was initiated by myself or came from someone else and whether the e-mail was directed to me specifically or whether it was just a general broadcast (some judgement calls in this).

Here's what I found:

Good news is that I don't instigate a lot of e-mail, which makes me feel better about myself–in fact, I send so few that I didn’t really think it was worth doing a similar analysis on my sent e-mail. I did, however, reply to a relatively large number of e-mails. Now that I think about it, I would guess this is the case for most academics: most of the e-mail misery comes from others randomly bugging you, and it’s usually just a handful of others.

As for speed of reply, I’m generally quite fast, but there’s a long tail:
Zooming in on the short time-scale:

A pretty substantial number of replies actually happened within minutes, sort of like texting or something, followed by a tail of longer reply times. I actually expected the distribution to be a bit more bimodal, but it's pretty unimodal, just with a long tail. I did notice that I have chunks of reply e-mail at the beginning and end of the day, which is good–my intention lately has definitely been to try and do as much batch processing as possible. I think I could be more disciplined about this, though.

Of course, the key piece of data is what different sorts of e-mail I get. Here’s how I broke it down:
  1. Spam
    1. Spam spam. Like, Nigerian Bankers who have a great deal on Viagra for you. 
    2. Science spam. This is various marketing for HPLC equipment or strange journals or whatever. I get a lot of this, presumably because various vendors have sold my e-mail to direct marketers.
  2. Bacn. Bacn is a very interesting category. It is like spam, but a level up: it’s something where there is some sort of relationship there, including perhaps direct solicitation of the e-mail. Here is how I broke that down:
    1. Personal. e.g. NYtimes.com table of contents.
    2. TOC. Tables of contents of various journals.
    3. Science. ResearchGate, Nature Publishing Group
    4. Penn Bacn. Seminar announcements, thesis defenses, visitors, latest fund-raising drive.
  3. Scheduling. This includes setting up a meeting or lunch or whatever with someone, thesis committee meeting times, etc.
    1. Scheduling Bacn. These are scheduling e-mails in which you’re just sort of along for the ride. You don’t have to do anything, but the e-mail is there, perhaps asking you if you want to meet with so and so.
  4. Teaching. Students asking for help or whatever.
  5. Evaluations and Letters. Someone asking you to evaluate a person or paper or whatever in some way, shape or form. An important part of our lives. I’m of course happy to do this for people who have been in my life in the lab. Less exciting is...
    1. Evaluations and Letters Bacn. This is any sort of evaluation of someone or something from outside. This includes, but is not limited to, reviewing papers.
  6. Research. This is what we’re supposed to be doing, right? Well, that all depends…
    1. Logistics. This is all stuff about orders, handling of manuscripts, lab organization, etc.
    2. Collaborations. This is managing various collaborations with other groups. This does not include close collaborators with whom we are doing real science together. It’s more just like people we’re doing a one-off experiment with. Often, there is overlap with the Logistics category.
    3. Research Bacn: Seems like a weird category, right? These are what I would consider relatively unsolicited e-mails that are random and tangential to your research effort, but are science related. Like, someone sends you a link to a paper they wrote. Or someone had a thought after meeting with you. Or something. This is not quite Bacn in the sense that you may not necessarily be able to ignore all of it, but it’s not quite important enough to escape being Bacn.
    4. Actual Research: This is, you know, actual research. Also a proxy for what I consider the most important to me. Mostly conversations with people whom we are working with closely about science. This can include making decisions about scientific goings-on in the lab, or thoughts on an experiment, or how to interpret something–basically, the fun part of it.

So what’s the breakdown? Here are some pie-charts (I’ll get to strategies I’m thinking about implementing later).

Let’s start with spam. Turns out I don’t get that much of it. It certainly doesn’t take that long to get rid of them. In fact, I have to say that I sometimes rather enjoy them for their humorous qualities. Here are four of my favorite examples:

Message 1:
Subject: ВОССТАНОВИМ ЗАПУЩЕННЫЙ УЧЕТ [“WE WILL RESTORE YOUR NEGLECTED BOOKKEEPING”]
Вы руководитель от Вас внезапно ушел бухгалтер! [You’re the boss and your accountant has suddenly quit!]
Вас предали? Вы подставлены? Завтра налоговая? [Betrayed? Set up? Tax inspectors tomorrow?]


БУХГАЛТЕРСКИЙ БЕСПРЕДЕЛ!!! [ACCOUNTING MAYHEM!!!]

Message 2:

Subject: Лучший Новогодний подарок - безопасность ваша и ваших близких! [The best New Year’s gift: safety for you and your loved ones!]

Message 3:
Subject: Your  Account Was Banned
This is a joke :)

Than trying to work mounted on clumsy, long webfeet by the
ecriture artiste which the french writers that hears. Similarly,
employing the eye, it is a moment without devoting his heart
upon mahadeva. Towards the abode of bhishma, casting aside

their.

Message 4:
Subject: Mandy - 100% results.
Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lolGy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.Gy, lol.

I think the Gmail spam filters do a pretty good job of getting rid of most of this cruft.

Bacn. This was perhaps the biggest surprise. Most of what I get is Bacn. And it’s super annoying to sort through, due primarily to the very nature of Bacn, which is something that you might conceivably be interested in. And one of the worst offenders is Penn! The amount of Penn Bacn I get is crazy. It’s primarily seminar announcements (and reannouncements (and re-reannouncements)) and various other random stuff that I may in theory want to know about, but I typically won’t. And it typically comes from a few prime Bacn distributors. The only problem is that I will sometimes get something important from these Bacners, and so I can’t just automatically filter them out into the trash. Hmm. These typically come mostly in the morning, which is when I try and get real work done.

Funny note about Bacn: I made some Bacn myself! Had to send out an e-mail to the graduate group about something or other. I feel sort of bad about it now. Even funnier, I managed to send research-related Bacn to myself, in the form of an e-mail to myself about a paper I thought I should read. Of course, I paid it about as much attention as all my other research-related Bacn… :)

Scheduling. Surprisingly large amount of e-mail just to schedule appointments. This was actually a relatively tame week in that regard, so I was sort of surprised how much e-mail circulated about that.

Research. Large number of logistical e-mails, often about shipping, etc. The shipping and ordering stuff doesn’t take up too much time, honestly, perhaps because we have a relatively small operation. It was interesting to see how much Research “collaborations” took up. To me, this is partly a matter of how much you invest in your scientific community, sort of like being a good citizen. That said, it is clear that this can suck your brain quite easily. Research Bacn is, I think, something I get a lot more of than most people, for various reasons. Surprisingly (unsurprisingly?) little time spent on actual Research Research e-mails. Which I actually regard overall as a good thing: for most research discussions, I talk with the people in my lab directly. I think that is a far more efficient way to get things done, generally, and avoids those super long e-mails that take hours to craft.

So what to do with this data? I think I came to a few primary conclusions:
  1. I need to organize my e-mail so that the Bacn is out of sight most of the time. I try my best to ignore Bacn, but in practice, it takes a lot of discipline to avoid looking at all those e-mails during the day, especially when there are other interesting e-mails interspersed in my inbox that I may very well want to deal with. To do this, I’ve implemented filters in Gmail to send most of these to a specific folder that I will check once a day or so, hopefully in a really fast batch mode. There is some slight chance that I might miss a timely e-mail, but whatever. Looking at it now, perhaps this is obvious, but somehow I just didn’t think of it before.
  2. I get a lot of research-related logistical e-mails about ordering and such that I should probably be delegating. These are not quite Bacn, because I (or someone in the lab) do need to give some input or really read them, sometimes in a timely manner. But just as often not. I also noticed I got a few more of these this week than usual.
  3. Teaching: I didn’t get a lot of teaching e-mail this week, which is nice, but somewhat unusual. I actually have a specific teaching gmail account that I ask students to use–this organization is very useful, and it allows me to make others do some of the organizing for me. Of course, you have to actually tell your students about it, which I of course forgot to do this term in my grad class. But I will definitely remember next term in my big required undergrad class. I will also be sure to have a policy that I only respond to student e-mails at one particular time each week, no exceptions.
  4. Perhaps the most important lesson is to BE DECISIVE. Someone (and I’m so sorry, I forget who, and the comments got deleted) left an awesome comment on the blog somewhere about a simple rule, which is to read each e-mail only once. I think that’s absolutely right. I definitely found myself reading an e-mail, mulling it over, and then mulling it over again. I have to not do that. If it requires thought, I should just make a (prioritized) to-do list item for it, mark it as read, and be done with it. Otherwise, I’m just cycling over and over again.
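The filtering idea in the first point above can be sketched as a simple routing rule. This is purely a hypothetical illustration: the sender addresses and label names below are made up, and in practice a Gmail filter rule (or a script against the Gmail API) would do this routing declaratively.

```python
# Minimal sketch of the Bacn-filtering idea: route known bulk senders out of
# the inbox into a label that gets batch-processed once a day. All addresses
# and labels here are hypothetical.

BACN_SENDERS = {
    "seminar-announce@lists.example.edu",
    "events@example.edu",
    "toc-alerts@journal.example.org",
}

def route(from_address: str) -> str:
    """Decide which label an incoming message should get."""
    sender = from_address.strip().lower()
    if sender in BACN_SENDERS:
        return "Bacn"   # out of sight; check once a day in fast batch mode
    return "Inbox"      # everything else still gets seen immediately

# Example: one seminar blast and one real collaborator e-mail.
assert route("Seminar-Announce@lists.example.edu") == "Bacn"
assert route("collaborator@university.edu") == "Inbox"
```

The caveat from the post applies here too: a hard sender-based rule occasionally misroutes something timely, which is the price of the batch-mode workflow.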
Anyway, those are some thoughts. I will try and implement these changes this week and post again once the results of the reorganization are in.

Sunday, November 9, 2014

My favorite quote about LaTeX

Argh, just finished struggling through submitting a LaTeX document to a journal. And I think I still screwed up and will have to do some more fussing. My only hope (and a fading one at that) is that things will not devolve to the point where I just have to copy the whole damn thing into Google Docs, where you can actually spend your time on, you know, doing real work.

So I just Googled around and found the following page, which has my new favorite quote about LaTeX:
Latex ("LaTeX" if you're pretentious as hell) is the biggest piece of shit in the history of both pieces and shit.
Yes.

(And yes, before you say it, I know what you are going to say.)

Saturday, November 8, 2014

“Edge of Tomorrow” and the case for better education

I just watched Edge of Tomorrow, a recent action movie with Tom Cruise, and it got me thinking about education. In case you haven’t seen it, it’s basically an action movie version of Groundhog Day, where Tom Cruise lives the same day over and over until he saves the world from alien invaders. Umm, well, that last sentence sounded pretty stupid, but I actually thought it was a pretty good movie.

Anyway, in the movie, Tom Cruise (Sgt. Cage) enlists the help of Emily Blunt (Rita Vrataski, super badass alien killer), and every time he relives the same day, he makes it a little further towards killing the aliens with her. He remembers everything that happened, but she remembers nothing. That means that he has to teach her everything that he has collectively learned every day, which is of course limited by the capacity that she has to absorb all that information. It occurs to me that our own lives are a lot like Rita’s day. Each of us is born knowing nothing, and we have exactly one lifetime to learn the collective knowledge of the world (and hopefully add to it) before we die. As our civilization’s knowledge burgeons, we have to get better at cramming this stuff into our kids' brains, because they still just have one lifetime to learn an ever increasing sum of knowledge and then to use it. Somehow, thinking about it this way makes me think that it’s really sad that we haven’t paid as much attention to how we educate as we should. I mean, I guess I already knew that, but it just seems to take on a bit more urgency for me when I think about it this way.

Hmm. I can’t believe I just made an analogy between Tom Cruise and the collective knowledge of the world. I need a drink.

Friday, November 7, 2014

My water heater is 100% efficient (in the winter)

Just had a thought while taking a shower the other day. These days, there's lots of effort to rate appliances by their efficiency. But it occurs to me that inefficiency leads to heat, and if you are heating your home, then you are basically using all that "wasted" energy. So even if some of the gas used for our water heater doesn't actually heat the water, as long as it's in the basement and the heat travels upward, that heat is not going to waste. So the effective efficiency of the appliance is actually higher than expected. Conversely, in summer, if you use the air conditioner, the opposite is true. I guess the overall efficiency would depend on your mix of heating and cooling.
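A back-of-the-envelope version of this argument, with made-up numbers (the function, the seasonal fractions, and the assumed air-conditioner coefficient of performance are all illustrative, not from any appliance standard):

```python
# Sketch of "effective" efficiency with illustrative numbers.
# rated: fraction of the fuel's energy that actually ends up in the hot water.
# heating_frac / cooling_frac: fractions of the year when the waste heat helps
# (displacing furnace output, assuming the furnace burns the same fuel) or
# hurts (the AC has to pump it back out at some coefficient of performance).

def effective_efficiency(rated, heating_frac, cooling_frac, ac_cop=3.0):
    waste = 1.0 - rated
    credit = waste * heating_frac            # waste heat that warms the house
    penalty = waste * cooling_frac / ac_cop  # extra electricity the AC spends
    return rated + credit - penalty

# In deep winter (heating all the time), even a 60%-rated heater is
# effectively 100% efficient, which is the point of the post:
assert abs(effective_efficiency(0.60, 1.0, 0.0) - 1.0) < 1e-9

# With half the year heating and a quarter cooling, it lands in between:
print(round(effective_efficiency(0.60, 0.5, 0.25), 3))  # → 0.767
```

The one-for-one credit is the optimistic case; as noted below, if the waste heat displaces a cheaper or more efficient fuel, the credit shrinks accordingly.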

I was also thinking about this a while ago when I installed a bunch of LED lightbulbs. Although they use much less energy, they are producing much less heat to warm up the house. I mentioned this to Gautham, and he pointed out that using electricity to heat your house may be considerably less efficient than, say, natural gas, and so that means it's not 100% efficient, relatively speaking. Still, it's better than what one would naively expect.

Of course, the best thing about LED lightbulbs is not so much the electricity or cost savings (which are pretty modest, frankly), but the fact that they don't burn out. If you have a bunch of 50W halogen spotlights, you know what I mean. By the way, just got a "TorchStar UL-listed 110V 5W GU10 LED Bulb - 2700K Warm White LED Spotlight - 320 Lumen 36 Degree Beam Angle GU10 Base for Home" from Amazon, and it looks great (better than the other one I got from Amazon for sure).

Thursday, November 6, 2014

Why are papers important for getting faculty positions?

Loved Lenny's post about how a high profile paper out of your postdoc is not required for many positions in academia. The list he has is pretty good proof of that fact, and I know firsthand from my own experience–I think my "big postdoc paper" was just submitted by the time I had my last interview.

I think it's important to keep in mind, though, that the existence of such examples is not a proof that there are no causal connections between the two. I think a lot of this is field dependent as well as institution dependent. For instance, I definitely feel like my job search might have been easier with a published paper, especially in biology/medical departments. And I have definitely heard of places, for example in other countries, in which applicants have been explicitly told that the job is theirs if and only if their postdoc paper is accepted. And I have heard this multiple times, so it was not a one-off.

Why? If the search committee understands the work and the researcher and believes in them both, then why does the existence of an accepted high profile paper matter so much in and of itself? A big part of the answer is that visibility matters.

One thing I realized after starting my faculty job was that starting a lab is a hard business, and part of that business is getting people interested in your research. There are tons of people out there doing science. Why should someone want to join your lab? Why should anyone care about your work? Why should anyone give you funding to do this work? Why should you be the one to succeed when everyone else is out there doing good science as well? Having a high profile paper when you start is undeniably a part of the answer to these questions. And it’s also a simple metric of success that is readily interpreted by people across disciplines.

Departments generally want the people they hire to succeed. There are many reasons why it's a lot easier to succeed if you have a fancy paper as you are starting your lab. It helps in recruiting students and postdocs, and in getting grants and getting invited to talks. Same thing goes for coming from the lab of a big-name PI. The big-name PI will be out talking about your work at venues and forums that you can only dream about as a junior PI. These are all different pieces of the puzzle, and nice papers are for sure an important piece of that puzzle, for better or for worse. And the fact is that there is at least some correlation between where you publish (especially averaging over time) and the quality and importance of your work. Not a perfect one for various reasons, and I hate the current publishing system, but it is disingenuous to pretend that this is not the case.

I'm sure many people out there are saying "It should all be about the science, not where it's published or who you worked with or all that other stuff." Sure, sounds nice in theory, but in practice, it's harder than people think. Imagine you are in the market for a washing machine. You go to the store and there are hundreds of washing machines to choose from. Some come from name brands, some are completely unknown. Some of the name brand ones are rated in Consumer Reports by a handful of "washing machine experts", and some are rated much higher than others. Which one would you buy? Now imagine you are in the market for a colleague for at least the next 6-7 years, hopefully the next 30-40+ years, and you will be investing millions of dollars in this person and be interacting with them regularly on a professional basis. Their success or failure will reflect directly on your department. You better believe people make a pretty considered decision here. And yes, visibility matters. Personal connections matter. Papers matter. Your personality matters. Your science matters. EVERYTHING matters. Seriously, think about it: how could it possibly be otherwise?

Monday, November 3, 2014

Why don’t bioinformaticians learn how to run gels?

Just read an interesting post from Sean Eddy about genomics. Lots of points there about sequencing and big science and other stuff that seems well above my pay grade. But the post also brings up the notion that biologists should be able to do their own data analysis, in particular scripting with Perl/Python. I've heard this subject debated many times before, and I'm sure I'll hear it again. But I don't think it's the right way to think about it.

First off, I want to say that I agree with the underlying premise in theory. Yes, it would be great for everyone to have some basic skills in quantitative analysis and programming. It would certainly be useful for biologists to be able to analyze their own data, and we do all our own analysis at the command line in the lab, typically using tools graciously and freely provided by others. But others have different skills and interests, there is finite time in the day, and maybe they don't have the inclination to learn this stuff. Requiring biologists to learn to do things at the command line is, I think, missing a huge opportunity, and is also a bit unfair.

Consider the following: how many bioinformaticians are required to learn and perform library prep to do their work? And what if we told them to “just figure it out by Googling around”? I’m not even talking about understanding all the various technical aspects of library prep, I mean even just doing the basic protocols. Probably not very many have been required to do this. I’m sure they could do it and figure it out, but why should they, you might ask? A reasonable question. Well, then why should biologists be subjected to the pain of shell/Perl scripting just to figure out if some genes’ expression went up or down? Why does this work in only one direction? Remember, scripting is NOT SCIENCE. It is just a tool. I see no reason why everyone should have to learn about all the details of every tool in order to do their science. This even applies just within the realm of computation: how many people who use the log function know anything about how to implement it? Going up the chain, I don’t need to know why MATLAB uses Householder transformations to compute a QR factorization instead of Gram-Schmidt or even that it does so at all–I can just call it and trust that MATLAB does the best thing by default. That is the nature of a mature tool.
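To make the log example concrete: you rely on the documented contract of `math.log`, not on how the underlying C library computes it. The gene-expression framing in this little sketch is just an illustrative use, not from any particular analysis.

```python
# Using a tool through its contract: math.log documents what it returns;
# whether the underlying C library computes it by argument reduction plus a
# polynomial approximation, a table lookup, or something else entirely is
# invisible to the user, and that is exactly what makes it usable.
import math

# "Did this gene's expression go up or down?" as a log2 fold change
# (the counts here are made up):
before, after = 50.0, 400.0
fold_change = math.log(after / before, 2)   # three doublings

assert abs(fold_change - 3.0) < 1e-9
```

Nobody would say the biologist running this line needs to first implement a logarithm; the same courtesy ought to extend up the stack to analysis pipelines as a whole.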

Indeed, it is particularly ironic to hear these calls for DIY learning from genomic informaticians, when the experimental side of that same work is amongst the most commoditized and standardized bench work in existence (funnily enough, to a point where bioinformaticians might actually be able to do it with only minimal training!). Basically, add and remove liquids to/from each other for 1-2 days, squirt it in some sequencing chip and say go, then download the data. It’s pretty close to the big green “GO” button that everyone dreams about. And it comes from years of careful thought and consideration about the needs of the USER of the tool, not of the provider. Make no mistake, the technology underlying sequencing is very complicated and sophisticated. But the reason sequencing has taken off the way it has is because USING the (hardware/wetware) tool is very simple. Just like scripting/data processing, sequencing is not science, but a tool. It is, at this point, a much easier to use one than analysis software, in my opinion.

I of course appreciate that part of the reason that sequencing itself is so well developed is because there are huge companies with tremendous resources backing the effort. Fair enough. Perhaps it will require a commercial effort to build an easy to use pipeline for analysis. Maybe not. Either way, though, I think the main thing to keep in mind if you are in the tool business is that if you want people to use your tool, you will get a lot further by LISTENING (and I mean actually listening) to your users and their needs than you will by simply telling them about all the things that they ought to do and ought to know. It’s hard work, and requires a lot of thought and attention, and I certainly understand the sentiment that it may not fall within the purview of academic work. But I think it needs to happen one way or another. In the same way that simplified mobile operating systems brought computation to many more people than before, so will easy to use bioinformatics pipelines bring sequencing tools to many more biologists, which is a good thing.

This is most certainly not to say that biologists shouldn't be getting some more quantitative training, especially in computers. There is no doubt that learning some principles of programming and quantitative/statistical analysis can be hugely beneficial, given the way science as a whole is headed. Again, that is not the same thing as learning scripting. In fact, being able to script is completely unrelated to quantitative thinking and only moderately related to any high level concepts in programming. It is busywork, plain and simple. In my lab, we do quantitative work, and writing these scripts is still basically what I would consider a big waste of time. We can do it, but it has nothing to do with science, quantitative or otherwise, and most of us would much rather not have to bother. Even worse for science is that the requirement of scripting leaves those who can’t do it because of limited time or whatever out in the cold.

Oh, and by the way, I think Galaxy is a great step in this direction. Bravo to the developers, and thank you for your hard work!

Update, 11/4: In case you're wondering if we practice what we preach, we have two versions of our image analysis software. One is open source, very powerful, completely extensible, fancy software engineering, etc. The other one is super limited, but designed for use by scientists, not programmers. Both are freely available, but guess which one gets used by orders of magnitude more people...

Friday, October 24, 2014

The eleven stages of academic grief

Have had a spate of bad luck in the lab with several papers getting rejected. Ugh. Been getting used to the following cascade of emotions:

  1. Shock (30 seconds). E-mail from journal! Oh no, subject line says decision. Could be good decision, right? … Oh, not a good decision…
  2. Disbelief (1 minute). Really? Did I get the wrong email or something? Am I really reading this? Where is that link to resubmit? What do you mean there’s no link to resubmit?
  3. Reading the e-mail (2 minutes). Hmm. [Keyword search] “Should be published in this journal” Yes! “Not a big enough advance in the field” No! “Very exciting” Yes! “Hard to get excited about this paper” No!
  4. Anger with reviewers (5 minutes). What are they talking about? We already did that experiment in supp fig 97! Well, if I knew the answer to that, we would have submitted to a better journal! Oh, correlation isn’t the same as causation? Why didn’t I think of that? Thanks so much for your super wise words of wisdom, dear reviewer. May you rot in hell, where you will have eternity to think about our paper, instead of the apparently 17 minutes you spent on this stupid review.
  5. Reviewers, part 2 (7 minutes). I bet that reviewer is [random perceived academic enemy]. Grudge deepening.
  6. Anger with editors (10 minutes). What are they talking about? Why don’t these people get a spine? Do these people even know anything about this field. Or any field. Or anything at all? They must be failed academics. Or just stupid. Or both.
  7. Self doubt, abilities (6 hours). I am a failed scientist. Soon to be a failed academic. Or just stupid. Or both.
  8. The dark path (1 day). Wait, but my paper is much better than this other stupid paper in a higher profile journal. What gives? [You know not to go down this road. But you will.]
  9. Self doubt, career choice (1.5 days). Why am I working so hard? Why didn’t I just go to industry and never have to worry about papers ever again? Why should I be sweating these stupid reviewer comments? Am I still going to be sweating these reviewer comments for the next 30-40 years? Is this really it?
  10. Resignation and acceptance (2 weeks). We will get this paper out in the end. Time to move on. This study is good, it just needs to find the right home. The darkest hour is just before the dawn. There are many fish in the sea. Every cloud has a silver lining. Who knows. Maybe the reviewers even had a point about that one aspect of our paper. Wait a minute… 
  11. Reviewers, part 3 (2 weeks and 30 seconds): $^!# those reviewers! What do they know anyway? What are they talking about? If I ever see [random perceived academic enemy] again… 
So goes the inner monologue. To the outside world, it looks like this:
  1. Revise
  2. Resubmit
  3. Rejected
  4. Revise
  5. Resubmit
  6. Rejected
  7. Revise
  8. Resubmit
  9. Rejected
Sigh… you’d think it would get easier with time. It does, somewhat. But it also doesn’t.

Thursday, October 16, 2014

What makes a scientist creative?

Science is about generating knowledge, but it’s also about the process of generating knowledge, and few things delight as much as creative ways to generate knowledge. Some of my favorite examples include ribosome profiling from Jonathan Weissman’s lab, or Michael Elowitz’s two color noise experiments. Not that all scientific progress comes from creative experiments, nor do the results of all creative experiments stand the test of time. It’s just that these are the ones that are so awesome that you never forget about them.

Some scientists are just really good at coming up with creative ideas (Sanjay Tyagi, my former PhD advisor, is one of them). Where does scientific creativity come from? There is, I think, some notion that creativity is an innate ability, but I’ve come to think of it as a skill, and the distinction matters: skills can be learned and honed, whereas innate abilities cannot. Some amount of creativity is innate (perhaps having as much to do with interest in a topic as raw brainpower), but if you have someone with the raw materials to be a creative scientist, then you can help shape that material to make them more creative than they would be otherwise. How? Does some of this just rub off from the mentor to the mentee? What in particular is it that can rub off?

I’m guessing there’s a lot of psychology research in this area, but here is a thought that I had recently. It came from an e-mail exchange with one of my (very creative) trainees, which was an awesome moment as an advisor. I had just e-mailed the trainee, posing a question like “hmm, what are the implications of these results?” My trainee wrote back, saying “well, could inform x or y”, which is pretty much the current thinking in the field. And then I got another e-mail 10 minutes later saying “These are both silly answers. It is definitely something to continue to think about.” I was so proud!

This exchange got me thinking that maybe one of the underappreciated elements of being creative is just not settling for being not creative. If you are in science, there’s a pretty good chance that you have ideas, probably many ideas, maybe all the time. The key is really in the evaluation. When am I just settling for the status quo of thinking? When is the status quo probably right and there’s maybe nothing here? When have I really hit the foundation of the problem we’re working on? If I could do any experiment to test this, possible or impossible, what would it tell me? What is the closest I can approximate that in the lab? These are all things that we can consciously think about and that mentors can teach their mentees, and I think it can help us to be creative. I also think that establishing a rigorous culture of idea generation and evaluation can help the group as a whole become more creative.

Thinking about creativity reminds me about when I was in a band back in college. The leader of our band, Miguel, was one of the most creative people I have ever met–lyrics and music came out of him in ways that seemed mysterious and divine. (Incidentally, I feel like not settling was a big part for him as well.) He was really good friends with this other amazing songwriter named Joel, and Miguel used to say “You know how I know that Joel is a better songwriter than I am? Whenever I play someone a song I wrote, they say ‘Man, how did you ever think of that?’ When they hear a song Joel wrote, they say ‘Oh man, why didn’t I think of that?’” Same applies in science, I think.

Sunday, October 12, 2014

Disabled Google Plus comments

Hi there readers,

Quite some time ago, I enabled Google Plus comments on this blog, not fully knowing exactly what that would do. Seemed like a good feature, I thought. Only just recently did I realize that it required people to be on Google Plus to leave a comment, which really sucks. So I'm disabling that feature, because I know it discourages some commenters (like my mom). Sadly, this means that virtually all the comments on the posts for the last however long will be gone (which is why I was reluctant to switch). So sorry about this! Just want to say that I really appreciate all the comments that people have left here, and the only bright spot in doing this is that maybe this will result in more people leaving comments. If there isn't any uptick in comments, I'll re-enable the feature and all the old comments will come back.

Arjun

Saturday, October 11, 2014

What have I learned since being a PI?

Our lab started at Penn in January 2010, and the last several years have probably been the busiest and most action-packed of my professional life. I still vividly remember the very beginning, when we had far more boxes than people. Actually, I guess that’s still the case. But lots of other stuff has changed, and the lab now feels like the bustling, fun place I had always hoped it would be. What I had not anticipated was how much I would change and learn, both as a scientist and as a person, since I started. Here are some musings and observations:

- I realized that as a group, scientists (meaning grad students, postdocs, PIs and all the other folks that make a lab go) are pretty lucky. They are by and large smart, talented, driven people who could succeed in many different walks of life. They happen to do academic science, but could probably do many other things successfully, and it would be okay to do so. Also, staying in science is a privilege, not a right, one handed out with a lot more care than many people think.

- I stopped worrying as much about my career. Like, I need this paper to get this grant to get this job to get this… whatever. Partly, I’m just too tired and busy to do so. Partly, though, it’s also because I have realized just how lucky I am to do something I love, which I think is very rare in this world, especially for something as generally useless to the world at large as science. Not to say that I don’t want to get papers or grants or tenure or anything like that, nor is it something that I never think about, but just saying that the day to day makes me happy, for the most part.

- Life is long and can take scientists in many different directions. Academics have curious minds and will always be searching for new challenges, and doing what I'm doing now is just one of those challenges.

- I really want to try to do something important. I’ve now been in science just long enough to have seen a few scientific fads come and go, and while I’m not much of a scholar of science history, I think that experience has helped me gain a somewhat better perspective on when we really learn something about the world. I also realize that I will probably fail to do something important, because it’s just really hard to do so. But I hope to have fun trying.

- Related to this last point: it’s hard to predict where your science will take you, whether it will lead to something important or not, either in your time or the next. But the quality of how you execute your science and the conclusions you draw is the one thing you can enforce. And in a way, it’s the only thing that matters.

- I learned to not dismiss crazy ideas, and to allow the flexibility to let them grow. Starting out, I thought that I was going to run this super tight ship, with every project subjected to rigorous risk/reward analysis. I still think that’s actually not a bad thing and that most people don’t do enough of it, but sometimes it’s good to just let things go. Some of the best things going in the lab come from projects that I didn’t think had much future at the time.

- It is hard to change fields. Once you’re going in a certain direction, it’s what everyone expects of you: your trainees, your colleagues, yourself. On top of this personal inertia, the system is also set up to prevent you from changing fields, because you rely on your social network for papers, grants, etc. Your only hope is to develop enough clout that people outside your field might give you the benefit of the doubt. Or to just be such a small fry that nobody really cares.

- The colleagues I admire most are the ones who don’t take things too seriously, especially themselves.

- I’ve learned a lot about how to do science over the last few years, and I’m a much better scientist for it. How do you frame a problem? What can you really claim based on this data? What are alternatives? Looking back at myself coming into this job, I feel like I was hopelessly naive in so many ways, and now at least somewhat less so. I owe this development almost entirely to the incredible people in my lab, who really helped push me to think harder about virtually everything, and to my excellent colleagues here at Penn.

- It’s cheesy, yes, but it’s very satisfying to make a difference in someone’s life. A view from the outside is that this is about reaching students in class. That doesn’t work too well for me–I’m not a natural lecturer, and as such, I think my classroom teaching is just OK, despite a fair amount of effort. But I love working with people (graduate students, undergraduates, postdocs) in the lab, and for me, that’s how I feel like I make the most difference. I had one undergrad tell me that working in my lab was his single best experience at Penn. That was so awesome!

- Speaking of connecting with people, this blog has also been one of the most fun things I’ve done since becoming a PI.

- Got a lot to learn about leading a group, but I have learned one thing: personnel isn’t everything, it’s the only thing.

- “Failing to reach a trainee” (i.e., someone flames out of the lab) happens to everyone. The PI will be traumatized, and so will the trainee. It sucks. It has happened to virtually every PI I know; it’s just one of those things people tend not to talk about.

- Don’t give up on people. Or do? One school of thought preaches that people never change. Another school of thought is that there is some nugget of talent inside of everyone that is waiting to be nurtured. The truth is somewhere in between. I have now seen people who just can’t seem to figure it out no matter how much time gets put into them. I’ve seen others who seemed hopeless at first transform so utterly that it’s like talking with a different person by the end of their PhD. Personnel: completely maddening!

- For some aspects of running a group, there are clearly some right and wrong things you can do. But I feel like I've seen as many different paths to success as to failure. If you get conflicting advice, it probably means nobody really knows, so just trust your gut.

- Some people are out there to take advantage of you. Some people really want to help. Seek out the latter. Avoid the former. But you will encounter the former, so don’t let worrying or fuming about them take over your life because it will destroy you.

- Lots of stuff is broken. The temperature is off in your scope room. The bulb is out in the bathroom. The website for submitting grants was designed by masochists intent on making you cry up until the grant deadline. Some engineering undergraduates with good AP calc scores apparently don’t know what a derivative is. You can’t fix it all. Choose your battles.

Oh yeah, and one big thing I learned: setting up a lab is HARD WORK. One of the beautiful things about being young is thinking that you'll do it better yourself once you get the chance. Maybe. But I’ve developed a deep respect for anyone who has managed to set up a functioning, productive lab. Cheers.

Friday, October 3, 2014

A proposal for controlling the amount of paperwork

As anyone who’s tried to submit a grant knows, there is an absolutely enormous amount of paperwork involved: budgets, front matter, various other little bits and pieces and forms. It’s so much paperwork that it’s basically impossible to apply without a professional grants administrator, which most universities have. In fact, I was recently working with someone who didn’t have access to a grants administrator; I wanted him to participate in a grant, but he said he couldn’t because he didn’t have the time to figure out how to fill out all the forms. Yipes!

I’m sure there are plenty of studies about how paperwork tends to proliferate, but here’s my take on it and a potential solution. My feeling is that every bit of new paperwork comes from some sort of new initiative in which the new paperwork serves to encourage that goal. Like, “We want to promote diversity, so now include a minority involvement plan.” Or, in a recent grant, I had to include a Research Leadership Plan, presumably to encourage thinking about how the PIs will collaborate together. All laudable goals, so it’s sort of hard to argue with these being a good thing, right?

Well, the problem is that this leads to more and more paperwork as these encouraged goals pile up over the years. Here’s a solution, inspired, ironically enough, by the NIH. When we submit a grant, we have a page limit, right? This means that we have to make decisions: if you want to include a particular piece of additional data, it must come at the expense of another. So why not have a paperwork limit? Like, you can have a certain number and length of forms and no more. Any new form means you have to remove some older one. That would have the added benefit of forcing the paperwork-producing bodies to think carefully about which forms are the most important.

Of course, this still has the flaw that people can change the paperwork required, which is annoying to keep up with–take for instance the updated NIH Biosketch. Ugh, annoying. But I guess we should be thankful they didn’t make us submit an additional Biosketch! :)

Sunday, September 28, 2014

Sigma's getting with it on Twitter!

Just got this e-mail from Sigma that feels like some 57-year-old in marketing, tasked with "engaging the youth through social media," heard about selfies and Twitter from their kids and decided to put it all together to try and "go viral". I really gotta get in touch with my field rep for a T-shirt!

Here's our lab's Sigma Selfie, starring cholesterol and calcium chloride:

Sunday, September 14, 2014

University admissions at Ivy Leagues are unfair: wah-wah-wah

Lots of carping these days about university admissions processes. Steven Pinker had some article, then Scott Aaronson had a blog post, both advocating a greatly increased emphasis on standardized testing, because the Ivy League schools have been turning away academically talented but not “well-rounded” students. Ron Unz (referenced in the Pinker article) provides some evidence that Asians are facing the same quota-based discrimination that Jewish people did in the early 20th century [Note: not sure about many parts of the Unz article, and here's a counter–I find the racial/ethnic overtones in these discussions distasteful, regardless of whether they are right or wrong]. Discrimination is bad, right? Many look to India, with its system of very hard entrance exams that select the cream of the crop for the IIT system, and say: why not here?

Yeah. Well, let me let you all in on a little secret: life is not fair. But we are very lucky to live here in the US, where getting rejected from the Ivies is not a death sentence. Aaronson got rejected from a bunch of schools, then went to Cornell (hardly banishment to Siberia, although Ithaca is quite cold), then went on to have a very successful career, getting job offers from many of the same universities that originally rejected him. It’s hard not to detect a not-so-subtle scent of bitterness in his writing on this topic, based on his own experience as a 15-year-old with perfect SATs, a published paper, and spotty grades, and I would say that holding on to such a grudge risks drawing the wrong lesson from his story. Yes, it is ironic that those schools didn’t take him as an undergraduate. But the lesson is less that the overall system is broken and more that the system works: it identified his talent, nurtured it, and ultimately rewarded him for it.

Those who look elsewhere to places like India have it wrong, also. The IITs are rightly regarded as the crown jewels of Indian education. The problem is that the next tier down is not nearly so strong, thus not nurturing the talents of all those who were just below the cutoff for whatever reason. So all those people who don’t manage to do as well on that one entrance exam have far less access to opportunities than they do here. Despite these exams, India is hardly what one would call a meritocratic society. So again, I would not consider India a source of inspiration.

I understand the allure of something objective like the SAT. The problem is that beyond a certain bar, these tests just don’t provide much information. There are tons of kids with very high SATs. I can tell you right now that my SATs were not perfect, but I’m pretty sure I’m not that much less "smart" than some of my cohort who did get perfect SATs. I did terribly on the math subject GRE–I’m guessing by far the worst in my entering graduate school class–which almost scuppered my chances of getting into graduate school, but I managed to get a PhD just fine. At the graduate level, it is clear that standardized tests provide essentially no useful predictive information.

I think we’ve all seen the kid with the perfect grades from the top university who flames out in grad school, or the kid from a much less prestigious institution with mixed grades who just nails it. Moreover, as anyone who has worked with underrepresented minorities will tell you, their often low standardized test scores DO NOT reflect their innate abilities. There are probably many reasons why, but whatever, it’s just a fact. And I think that diversity is a good thing on its own.

So scores are not so useful. The other side of the argument is that the benefits of a highly selective university are immense–a precious resource we must carefully apportion to those most deserving. For instance, Pinker says:
The economist Caroline Hoxby has shown that selective universities spend twenty times more on student instruction, support, and facilities than less selective ones, while their students pay for a much smaller fraction of it, thanks to gifts to the college.
Sure, they spend more. So what. I honestly don’t see that all this coddling necessarily helps students do better in life. Also this:
Holding qualifications constant, graduates of a selective university are more likely to graduate on time, will tend to find a more desirable spouse, and will earn 20 percent more than those of less selective universities—every year for the rest of their working lives.
Yes, there is some moderate benefit, holding “qualifications constant”–I guess their vacations can last 20% longer and their dinners can be 20% more expensive on average. The point is that qualifications are NOT constant. The variance within the cohort at a given selective university is enormous, dwarfing this 20 percent average benefit. The fact is that we just don’t know what makes a kid ultimately successful or not. We can go with standardized testing or the current system or some other system based on marshmallow tests or what have you, but ultimately we just have no idea. Unz assembles evidence that Caltech is more meritocratic, but so far there seems to be little evidence that the world is run by our brilliant Caltech-trained overlords.

What to do, then? How about nothing? Quoting Aaronson:
Some people would say: so then what’s the big deal? If Harvard or MIT reject some students that maybe they should have admitted, those students will simply go elsewhere, where—if they’re really that good—they’ll do every bit as well as they would’ve done at the so-called “top” schools. But to me, that’s uncomfortably close to saying: there are millions of people who go on to succeed in life despite childhoods of neglect and poverty. Indeed, some of those people succeed partly because of their rough childhoods, which served as the crucibles of their character and resolve. Ergo, let’s neglect our own children, so that they too can have the privilege of learning from the school of hard knocks just like we did. The fact that many people turn out fine despite unfairness and adversity doesn’t mean that we should inflict unfairness if we can avoid it.
A fair point, but one that ignores a few things. Firstly, going to Cornell instead of Harvard is hardly the same thing as living a childhood of neglect and poverty. Secondly, universities compete. If another university can raise its profile by admitting highly meritorious students wrongly rejected by Harvard, well, then so be it. Those universities will improve, and we’ll have more good schools overall.

Which feeds into the next, more important point. As I said, it’s not at all clear to me that we have any idea how to select for “success” or “ability”, especially for kids coming out of high school. As such, we have no idea where to apportion our educational resources. To me, the solution is to have as many resources available as broadly as possible. Rather than focusing all our resources and mental energy into "getting it right" at Harvard and MIT, I think it makes much more sense to spend our time making sure that the educational level is raised at all schools, which will ultimately benefit far more people and society in general. The Pinker/Aaronson view essentially is that this is a “waste” of our resources on those not “deserving” of them based on merit. I would counter first that spending resources on educating anyone will benefit our society overall, and second that all these “merit” metrics are so weakly correlated with whatever the hell it is that we’re supposedly trying to select for that concentrating our resources on the chosen few at elite universities is a very bad idea, regardless of how we select those folks. The goal should be to make opportunities as widely available as possible so that we can catch and nurture those special folks out there who may not particularly distinguish themselves by typical metrics, which I think is the majority, by the way. A quick look at where we pull in graduate students from shows that the US does a reasonably good job at this relative to other places, a fact that I think is related to many of this country’s successes.

As I said before in the context of grad admissions, if you want to figure out who runs the fastest, there are a couple ways of going about it. You can measure foot size and muscle mass and whatever else to try to predict who will run fastest a priori–good luck with that. Or you can just have them all run in a race and see who runs the fastest. And if you want to make sure you don’t miss the next Usain Bolt or Google billionaire, better make the race as big and inclusive as possible.

Saturday, September 13, 2014

Greatest molecular biologist of all time?

Serena Williams just won her 18th grand slam title, and while I’m not a super knowledgeable tennis person, I think it’s fair to say that she’s the best female tennis player ever. Of course, in these discussions, it always comes down to what exactly one means by best ever. Is it the one who, at peak form, would have won head to head? Well, in that case, I doubt there’s much contest: despite whatever arguments about tennis racket technology improvement, Serena would likely crush anyone else. Is it the most dominant in their era? Is it the one who defines an era, transforming their sport? (Serena wins on these counts as well, I think.)

“Who is the greatest” is a common (and admittedly silly) pastime that physicists and mathematicians tend to play that has many of the same elements as sports (Newton and Gauss, respectively, if I had to pick). Yet curiously, molecular biology doesn’t have quite as much of this. There are certainly heroes (mythical and real) in the story of molecular biology, but there is much less of the absolute deification that you will find at the math department’s afternoon tea. Why?

I think there are a couple of reasons, but one of the big ones is that the golden era of molecular biology came much more recently in history than that of math and physics. And recent history differs from ancient history in one very important respect: there are just WAY more people. This means that it’s just that much harder nowadays for someone to come up with a good idea and develop it all entirely by themselves. In the time of Newton, there were just not a lot of trained scientists around, and even then, Leibniz came up with calculus around the same time. Imagine the same thing today. Let’s say you formulated the basic ideas of calculus. Your idea would travel across the internet instantaneously to a huge number of smart mathematicians, and for all you know, all the ramifications would get worked out within a very short period of time, perhaps even on a blog. Indeed, think about how many foundational results from the old days were worked out by one person: Maxwell’s equations, Einstein’s theory of relativity, Newton’s laws of motion. Nowadays, mathematical ideas tend to have many names attached, like Gromov-Witten invariants, Chern-Simons theory, etc. Einstein’s general theory of relativity is perhaps an example of this transition: I think I read somewhere that Hilbert actually worked out all the math but, out of respect, waited for Einstein to publish. Similarly, quantum mechanics has so many brilliant names associated with it that we can’t really call it “Dirac theory” or “Feynman theory”. It’s just very hard for any one person to develop an idea completely on their own these days.

This is the era that molecular biology came of age in. As such, there are just so many names associated with the major developments that it’s impossible to ascribe any one big thing to any one person, or even a small set of people. And I think the pace is accelerating even further. For instance, consider CRISPR. It’s clear that it’s something that’s captured the attention of the moment, and I’ve been utterly amazed at how quickly people have adopted and applied it in so many clever contexts seemingly instantaneously.

I think this is actually a wonderful thing about molecular biology and modern science in general. I think the excessive focus on the “genius” obscures the fact that scientific progress is a web of interconnected concepts and findings coming from many sources, and I love thinking about molecular biology in those terms. Although I have to admit that a good old-fashioned Newton vs. Einstein debate is a lot of fun!

Sunday, August 31, 2014

My new favorite website

I had a couple posts earlier about the Fermi paradox inspired largely by a post on the website waitbutwhy.com. It's now officially my favorite website. Check out some of these other great posts, many characterized by mind-blowing depictions of scale:

Plus other fun stuff about baby names, trips to Japan and Russia, etc. Anyway, great site!

Saturday, August 30, 2014

Do I want molecular biology to turn into physics?

I just read this little Nature News feature about the detection of low-energy neutrinos produced during the fusion reactions that power the Sun. I don't know much nuclear physics, but the article says that fusion in the Sun involves two hydrogen nuclei fusing into deuterium, and that the conversion of one proton into a neutron produces low-energy neutrinos. Nobody had been able to detect these neutrinos before because their low energy means their signal gets swamped out.

The experiment itself sounds incredible:
The core of the Borexino experiment features a nylon vessel containing 278 tonnes of an ultrapure benzene-like liquid that emits flashes of light when electrons are scattered by neutrinos. The liquid was derived from a crude-oil source nearly devoid of radioactive carbon-14, which can hide the neutrino signal. The detector fluid is surrounded by 889 tonnes of non-scintillating liquid that shields the vessel from spurious radiation emitted by the experiment's 2,212 light detectors.
That is some BIG science! Clearly an experimental triumph.

But it's also a triumph of theory. Theory in physics is so amazing. Can't tell you how many articles about physics have some version of the following line:
While the detection validates well-established stellar fusion theory, future, more sensitive versions of the experiment could look for deviations from the theory that would reveal new physics.
Not only is theory qualitatively right in physics, it usually provides quantitative predictions with an accuracy that seems utterly preposterous from the perspective of a molecular biologist:
Borexino can measure the flux of low-energy neutrinos with a precision of 10%. Future experiments could bring that down to 1%, providing a demanding test of theoretical predictions and thus potentially uncovering new physics.
For fellow quantitative biologists, it is beyond our wildest fantasies to have strongly predictive theories and models like these. Usually, we're pretty happy if we can get the sign of the prediction correct! That said, there is also something troubling in these articles, namely all the little hints like "more sensitive... could... reveal new physics". I think the operative word is "could". Mostly, it feels like our understanding of many fundamental physical processes is so deep and so accurate that there aren't many surprises left out there (except for that whole dark matter/energy thing...).

I think I'm overall really happy that biology has surprises coming out all the time that make us reconsider our basic understanding of how things work. I do think that molecular biology suffers a bit from "let's find the latest molecule that breaks the dogma" syndrome instead of focusing more effort on systematizing what we do know, but for the most part, I love the energy that comes from the huge amount of unknowns in the field and the huge challenge that comes with the effort to make molecular biology as predictive as physics. Of course, there's no reason to believe a priori that such predictivity is possible. But that's what makes it fun!

Saturday, August 23, 2014

Is academia really broken? Or just really hard?

(Second contrarian post in a row. Need to do some more positive thinking!)

Scarcely a day goes by when I don’t read something somewhere on the internet about how academia is broken. Usually, this centers around peer review of papers, getting an academic job, getting grants and so forth. God knows I’ve contributed a fair amount of similar internet-fodder myself. And just for the record, I absolutely do think that many of the systems that we have in place are deeply flawed and could do with a complete overhaul.

But what do all these hot-button meta-science topics have in common? Why do they engender such visceral reactions? I think they are all about the same basic underlying issue, namely competition for limited resources (spots in high impact journals, academic jobs, grant funding). I think we can and should fix the processes by which these resources are apportioned. But there’s also no getting around the fact that there are limited resources, and as such, there will be a large number of people dissatisfied with the results no matter what system we choose to use.

Take peer review of papers. Colossal waste of time, I agree. Personally, the best system I can envision is one where everyone publishes their work in PLOS ONE or equivalent with non-anonymous review (or, probably better, no review), then “editors” just trawl through that and publish their own “best of” lists. I’m sure you have a favorite vision for publishing, too, and I’m guessing it doesn’t look much like the current system–and I applaud people working to change this system. In the end, though, I anticipate that even if my system were adopted, everyone (including me) would still be complaining about how so-and-so hot content aggregator is not paying attention to their own particular groundbreaking results they put up on bioRxiv. The bottom line is that we are all competing for the limited attention of our fellow scientists, and everyone thinks their own work is more important than it probably is, and they will inevitably be bummed when their work is not recognized for being the unique and beautiful snowflake that they are so sure it is. Groundbreaking, visionary papers will still typically be under-recognized at the time precisely because they are breaking new ground. Most papers will still be ignored. Fashionable and trendy papers will still be popular for the same reason that fashionable clothes are–because, umm, that’s the definition of fashion. Politics will still play a role in what people pay attention to. We can do pre-publication review, post-publication review, no review, more review, alt-metrics, old-metrics, whatever: these underlying truths will remain. It’s worth noting that the same sorts of issues are present even in fields with strong traditions of using pre-print servers and far less fetishization of publishing in The Glossies.

I think it's the fear and heartbreak associated with rejection by one's peers (either by reviewers or by potential readers) that is the primary underlying motivation for people to consider alternative approaches to publishing–it certainly is for me. We should definitely consider and implement alternatives, but I think it's worth considering that the anguish that comes from nobody appearing to appreciate your work will always be present, because other people's attention is a limited and precious resource that we are all fighting for. [Update 8/25: same points made here and here by Jeremy Fox]

For trainees, the other “great filter event” they probably experience is getting a faculty position. Yes, the system is probably somewhat broken (in particular with gender/racial disparities that we simply must address), although compared to peer review of papers, search committees are far more deliberate in their decision making, precisely because the stakes are so much higher. Yes, we can and should encourage and support students considering other career paths. I guess what I’m saying is that even if everyone went into science with their eyes wide open, with all the best mentoring in the world, the reality is that there are more dreamers than dream jobs available. That means many people who feel like they deserve such a position (and certainly many of them do) are not going to get one. And they probably won’t be happy about it.

(Sidebar about career path stuff: to be frank, most of the trainees I’ve met are pretty realistic about their chances of getting a faculty position and have many other plans they are considering as well, and so I think some of the “I’m not getting support and advice about other career choices” meme is overblown, especially these days. We can blame the “system” for somehow making it seem like doing something other than academics is a failure, and there is definitely some truth to that. At the same time, I think it’s fair to say that many people do a PhD because being a scientist was a long-held dream from childhood, and so if we’re being totally honest, at least some of the sense of failure comes from within. It’s a lot easier to say abstractly that we should be realistic with trainees and manage expectations and so forth than to actually look someone in the eye and tell them to their face that they should give up on their dream. I agree that this is the sort of hard stuff PIs should do as part of their jobs–I’m just saying it’s not as easy as it is sometimes made out to be. And yes, I’ve personally experienced both sides of this particular coin.)

Look, nobody likes this stuff. Rejecting is about as much fun as being rejected, and I FULLY support all efforts to make our scientific processes better in every possible way. All I’m saying is that even the best, most utopian system we can think of will suffer from inequities, politics, fashions, etc., because that is just human nature. These systems are largely run by scientists, after all, and so we really have nobody to blame but ourselves. I realize it’s much easier to blame Spineless Editor From Fancy Journal, Nasty Reviewer with a Bone to Pick, Crusty Old Guy on the Hiring Committee, or Crazy Grant Reviewer with a Serious Mental Health Issue, and I’ve for sure blamed all those people myself when I have failed at something. Maybe I was right, or maybe I was wrong. I’m pretty sure it’s mostly a rationalization that lets me keep my chin up in what can sometimes be a fairly demoralizing line of work. Science is a human endeavor. It will be as good and as bad as humans are. And when the chips are down and there’s not enough to go around, that can bring out both the best and the worst in us.