Monday, April 25, 2016

Why I gave your paper a Strong Accept

See also: Why I gave your paper a Strong Reject

I know this blog is mostly about me complaining about academics, but there's a reason I stay engaged with the research community: I learn stuff. Broadly speaking, I think it's incredibly important for industry both to stay abreast of what's going on in the academic world and to have some measure of influence on it. For those reasons, I serve on a few program committees a year and do other things like help review proposals for Google's Faculty Research Award program.

Apart from learning new things, there are other reasons to stay engaged. One is that I get a chance to meet and often work with some incredible colleagues, either professors (to collaborate with) or students (to host as interns and, in many cases, hire as full-time employees later on).

I also enjoy serving on program committees more than just going to conferences and reading papers that have already been published. I feel like it's part of my job to give back and contribute my expertise (such as it is) to help guide the work happening in the research community. Way too many papers could use a nudge in the right direction from someone who knows what's happening in the real world -- as a professor and grad student, I gained a great deal from my interactions with colleagues in industry.

Whenever I serve on a program committee, I make it a point to champion at least a couple of papers at the PC meeting. My colleagues can attest to times I've (perhaps literally) pounded my fist on the table and argued that we need to accept some paper. So to go along with my recent post on why I tend to mark papers as reject, here are some of the reasons that make me excited to give out a Strong Accept.

(Disclaimer: This blog represents my personal opinion. My employer and my dog have nothing to do with it. Well, the dog might have swayed me a little.)

The paper is perfect and flawless. Hah! Just kidding! This never happens. No paper is ever perfect -- far from it. Indeed, I often champion papers with significant flaws in the presentation, the ideas, or the evaluation. What I try to do is decide whether the problems can be fixed through shepherding. Not everything can be fixed, mind you. Minor wording changes or a slight shift in focus are fixable. Major new experiments or a total overhaul of the system design are not. When I champion a paper, I only do so if I'm willing to be on the hook to shepherd it, should it come to that at the PC meeting (and it often does).

Somebody needs to stand up for good papers. Arguably, no paper would ever get accepted unless some PC member were willing to go to bat for it. Sadly, it's a lot easier for the PC to find flaws in a paper (and hence reject it) than it is to stand up for a paper and argue for acceptance -- despite the paper's flaws. At every PC meeting I go to, someone says, "This is the best paper in my pile, and we should take it -- that's why I gave it a weak accept." Weak accept!?!? WEAK!?! If that's the best you can do, you have no business being on a program committee. Stand up for something.

In an effort to balance this out, I try to take a stand for a couple of papers every time I go to a PC meeting, even though I might not be successful in convincing others that those papers should be accepted. Way better than only giving out milquetoast scores like "weak accept" or -- worse -- the cop-out "borderline".

The paper got me excited. This is probably the #1 reason I give out Strong Accepts. When this happens, it's usually by the end of the first page that I'm excited about the rest of the paper. The problem sounds compelling. The approach is downright sexy. The summary of results sounds pretty sweet. All right, so I'm jazzed about this one. Sometimes it's a big letdown when I get into the meat and find out that the approach ain't all it was cracked up to be in the intro. But when I get turned on by a paper, I'll let the small stuff slide for sure.

It's hard to predict which papers will get me fired up. Sometimes it's because the problem is close to stuff I work on, and I naturally gravitate to those kinds of papers. Other times it's a problem I really wish I had solved. Much of the time, it's because the intro and motivation are just really eloquent and convincing. The quality of writing matters a lot here.

I learned a lot reading the paper. Ultimately, a paper is all about what the reader takes away from it. A paper on a topic slightly out of my area that does a fine job explaining the problem and the solution is a beautiful thing. Deciding how much "tutorial" material to fit into a paper can be challenging, especially if you're assuming that the reviewers are already experts in the topic at hand. But more often than not, the PC members reading your paper won't know as much about the area as you expect. Good exposition is usually worth the space. The experts will skim it anyway, and you might sell the paper to a non-expert like me.

There's a real-world evaluation. This is not a requirement, and indeed it's somewhat rare, but if a paper evaluates its approach on anything approximating a real-world scale (or dataset), it wins major brownie points in my book. Purely artificial, lab-based evaluations are more common, and less compelling. If the paper includes a real-life deployment or a retrospective on what the authors learned through the experience, even better. Even papers without that many "new ideas" can get accepted if they have a strong and interesting evaluation (cough cough).

The paper looks at a new problem, or has a new take on an old problem. Creativity -- either in terms of the problem you're working on, or how you approach that problem -- counts for a great deal. I care much more about a creative approach to solving a new and interesting (or old and hard-to-crack) problem than a paper that is thoroughly evaluated along every possible axis. Way too many papers are merely incremental deltas on top of previous work. I'm not that interested in reading the Nth paper on time synchronization or multi-hop routing, unless you are doing things really differently from how they've been done before. (If the area is well-trodden, it's also unlikely you'll convince me you have a solution that the hundreds of other papers on the same topic have failed to uncover.) Being bold and striking out in a new research direction might be risky, but it's also more likely to catch my attention after I've reviewed 20 papers on less exciting topics.


Wednesday, April 20, 2016

Why I gave your paper a Strong Reject

Also see: Why I gave your paper a Strong Accept.

I'm almost done reviewing papers for another conference, so you know what that means -- time to blog.

I am starting to realize that trying to educate individual authors through my witty and often scathing paper reviews may not be scaling as well as I would like. I wish someone would teach a class on "How to Write a Decent Goddamned Scientific Paper", and assign this post as required reading. But alas, I'll have to make do with those poor souls who stumble across this blog. Maybe I'll start linking this post to my reviews.

All of this has probably been said before (strong reject) and possibly by me (weak accept?), but I thought I'd share some of the top reasons why I tend to shred papers that I'm reviewing.

(Obligatory disclaimer: This post represents my opinion, not that of my employer. Or anyone else for that matter.)

The abstract and intro suck. By the time I'm done reading the first page of the paper, I've more or less decided if I'm going to be reading the rest in a positive or negative light. In some cases, I won't really read the rest of the paper if I've already decided it's getting The Big SR. Keep in mind I've got a pile of 20 or 30 other papers to review, and I'm not going to spend my time picking apart the nuances of your proofs and evaluation if you've bombed the intro.

Lots of things can go wrong here. Obvious ones are pervasive typos and grammatical mistakes. (In some cases, this is tolerable, if it's clear the authors are not native English speakers, but if the writing quality is really poor I'll argue against accepting the paper even if the technical content is mostly fine.) A less obvious one is not clearly summarizing your approach and your results in the abstract and intro. Don't make me read deep into the paper to understand what the hell you're doing and what the results were. It's not a Dan Brown novel -- there's no big surprise at the end.

The best papers have really eloquent intros. When I used to write papers, I would spend far more time on the first two pages than anything else, since that's what really counts. The rest of the paper is just backing up what you said there.

Diving into your solution before defining the problem. This is a huge pet peeve of mine. Many papers go straight into the details of the proposed solution or system design before nailing down what the authors are trying to accomplish. At the very least, you need to spell out the goals and constraints. Better yet, provide a realistic, concrete application and describe it in detail. And tell me why previous solutions don't work. In short -- motivate the work.

Focusing the paper on the mundane implementation details, rather than the ideas. Many systems papers make this mistake. They waste four or five pages telling you all about the really boring aspects of how the system was implemented -- elaborate diagrams with boxes and arrows, detailed descriptions of the APIs, what version of Python was used, how much RAM was on the machine under the grad student's desk.

To a first approximation, I don't care. What I do care about are your ideas, and how those ideas will translate beyond your specific implementation. Many systems people confuse the artifact with the idea -- something I have blogged about before. There are papers where the meat is in the implementation details -- such as how some very difficult technical problem was overcome through a new approach. But for the vast majority of papers, the implementation doesn't matter that much, nor should it. Don't pad your paper with this crap just to make it sound more technical. I know it's an easy few pages to write, but it doesn't usually add that much value.

Writing a bunch of wordy bullshit that doesn't mean anything. Trust me, you're not going to wow and amaze the program committee by talking about dynamic, scalable, context-aware, Pareto-optimal middleware for cloud hosting of sensing-intensive distributed vehicular applications. If your writing sounds like the automatically-generated, fake Rooter paper ("A theoretical grand challenge in theory is the important unification of virtual machines and real-time theory. To what extent can web browsers be constructed to achieve this purpose?"), you might want to rethink your approach. Be concise and concrete. Explain what you're doing in clear terms. Bad ideas won't get accepted just because they sound fancy.

Overcomplicating the problem so you get a chance to showcase some elaborate technical approach. A great deal of CS research starts with a solution and tries to work backwards to the problem. (I'm as guilty of this as anyone.) Usually when sitting down to write the paper, the authors realize that the technical methods they are enamored with require a contrived, artificial problem to make the methods sound compelling. Reviewers generally aren't going to be fooled by this. If simplifying the problem just a little bit renders your beautiful design unnecessary, it might be time to work on a different problem.

Figures with no descriptive captions. This is a minor one but drives me insane every time. You know what I mean: A figure with multiple axes, lots of data, and the caption says "Figure 3." The reviewer then has to read deep into the text to understand what the figure is showing and what the take-away is. Ideally, figures should be self-contained: the caption should summarize both the content of the figure and the meaning of the data presented. Here is an example from one of my old papers:

[Figure omitted: an example plot from one of my old papers, with a caption that describes both what is shown and what to conclude from the data.]

Isn't that beautiful? Even someone skimming the paper -- an approach I do not endorse when it comes to my publications -- can understand what message the figure is trying to convey.
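
For the curious, here is roughly what a self-contained figure looks like in LaTeX. This is a made-up sketch (the file name, quantities, and numbers are all invented, not taken from my paper); what matters is the shape of the caption:

    % requires \usepackage{graphicx} in the preamble
    \begin{figure}[t]
      \centering
      \includegraphics[width=\columnwidth]{figs/throughput.pdf}
      \caption{Throughput versus offered load for the baseline and for
        our system, averaged over 10 runs; error bars show one standard
        deviation. Our system sustains near-peak throughput up to 80\%
        load, while the baseline degrades sharply past 50\%, because
        admission control sheds excess requests before they queue up.}
      \label{fig:throughput}
    \end{figure}

The caption answers both "what am I looking at?" and "what should I take away?", so the reader never has to go digging through the evaluation section to interpret the figure.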

Cursory and naive treatment of related work. The related work section is not a shout-out track on a rap album ("This one goes out to my main man, the one and only Docta Patterson up in Bezerkeley, what up G!"). It's not there to be a list of citations just to prove you're aware of those papers. You're supposed to discuss the related work, place it in context, and contrast it with your approach. It's not enough to say "References [1-36] have also worked on this problem." Treat the related work with respect. If you think it's wrong, say so, and say why. If you are building on other people's good ideas, give them due credit. As my PhD advisor used to tell me, stand on the shoulders of giants, not their toes.

Startup Life: Three Months In

I've posted a story to Medium on what it's been like to work at a startup, after years at Google. Check it out here.