TBT: Of monsters and developmental mysteries

Once upon a time, I applied for an internship at my local public radio station. While I didn’t end up getting it, I stumbled across the writing sample I sent them today: an informal explanation of developmental biology. Since I am a freshly minted PhD as of this month (yay!), and still love development after devoting most of the last 10 years to studying it, I feel that my thoughts on development from several years ago still apply. So, TBT, here’s my waxing poetic on the underpinnings of all multicellular organisms:

Few things are as fascinating as creatures mistakenly born with a single central eye, or those with too few or too many fingers on each hand. How did they get to be that way? When things go so drastically wrong, what is the culprit, and should we care? It would be easy to dismiss such anomalies as freaks of nature. After all, they are exceedingly rare. But in the study of developmental biology these monsters are the key to understanding why most living things aren’t that way, and how they form correctly. The mistakes show us not only who we are, but why we are.

Every living thing, whether it’s a plant, an animal, or a fungus, has an instruction manual that explains how to make more of it. This manual, known as the genome, contains the code for every building material, every map of where things go, and every step needed to make and maintain a creature. The manual has a section (called a gene) on how to make a molecule named Sonic Hedgehog, which looks nothing like the video game character. However, this molecule and the instructions for where to make it are crucially responsible for ensuring that a creature has two eyes on its face and five digits on its hand. Without Sonic Hedgehog, creatures are born cyclopic, with a single central eye, or with only one digit on each limb. Change the instruction manual and you’ll change how or what gets made. These changes, called mutations, are often disastrous. But every so often a creature is born with a mutation that helps it, gives it a little superpower over its peers, so that it’s a little more successful at reproducing. When this happens, the mutation is passed along to the progeny of the original mutant, and those progeny are more successful than their peers, and so on until the mutation is present in every member of the species, or until an entirely new species branches off on its own. Mutation, the anarchy of the genome, is responsible for the incredible diversity among living things.

Developmental biologists love firstly to break things, and secondly to hijack them. We use mutation, nature’s own act of rebellion and innovation, to control development so that we can study it in animals like mice and chickens. Remember how the instruction manual includes a map for where things go? This applies to Sonic Hedgehog. By destroying, via mutation, the part of the manual that says where to put Sonic Hedgehog in the growing limb, we can block the growth of digits there. On the other hand (no pun intended), if we duplicate the map for where to put Sonic Hedgehog and put the copy on the other side of the limb, we can cause extra digits to sprout in mirror image to the original ones. By creating this controlled disorder developmental biologists may be making monsters, but the monsters let us answer the deepest questions about ourselves.

By looking for the misfits, by breaking the system in myriad ways, developmental biologists seek to understand. We study chaos so that we can see what order is supposed to look like, and so that we can recognize and fight chaos effectively when it knocks on our doors in the form of diseases like cancer. And sometimes we study chaos simply because we’re drawn to it, because it’s a fascinating corruption of ourselves. We study mutants because that’s what we are.

 

Featured image: my favorite model organism.

New BSR blog: The Moral Responsibility of Genome Editing

Hey all, I’ve been busy with so many things over the last few months, one of which was writing this blog post for the Berkeley Science Review on the ethics of genome editing. We’re approaching an era where we can relatively easily edit genes in the cultured cells of sick patients, in patients’ cells within their bodies, or even in early embryos. This raises a whole host of ethical issues to consider. Check out the article for a discussion of these issues!

Weaponizing the scientific method

We’re experiencing an unprecedented level of lying in American politics right now, mostly courtesy of our “so-called” president. What is there to do about this? Right now it seems like no amount of calling out bullshit stops Trump from baldly lying, or his supporters from accepting the lies, or congressional Republicans from spinelessly hiding in a corner pretending it’s not happening. Trump is doubling down on his lies and lashing out at the media for exposing him, which threatens our free press and our ability to fight back against his wannabe-authoritarian regime. I’ve talked before about how hopeless this makes everything seem, and about how we don’t have much of a choice except to fight back. We are still scrambling to figure out the best way to fight.

As a scientist and someone who prefers to rely on logic to make decisions, I believe that one way to do this is by teaching people to get comfortable using the scientific method all the time. It’s not really that hard once you train yourself to think that way and apply it to all kinds of situations, even those outside the lab. So what is the scientific method? In a nutshell, it’s a set of principles used to conduct evidence-based inquiry. It’s founded on the idea that you can make an observation (“X phenomenon occurs”), formulate a hypothesis to explain the observation (“Y is acting on Z to cause X”), and then do tests to find out whether your hypothesis is correct (“If I remove Y, does X still occur?”). A good scientist will form multiple possible hypotheses, including null hypotheses (“Y is not acting on Z to cause X” or “Y and Z have nothing to do with X”), and set up tests that will generate observations to address each hypothesis. At the root of the scientific method is thinking of multiple possible scenarios and applying skepticism to all of them, as opposed to just accepting the first one you think of without questioning it further. Oftentimes you come up with a hypothesis that makes logical sense and is the simplest explanation for a phenomenon (i.e., it’s parsimonious), but when you investigate it further you find that it is not at all correct.
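
If you like seeing things in code, here’s a toy sketch of that observe-hypothesize-test loop in Python. Every number in it is made up purely for illustration; the point is just the shape of the reasoning: state a null hypothesis (“Y has nothing to do with X”), then ask how surprising your observation would be if that null were true.

```python
# Toy sketch of the hypothesis-test loop, using simulated (made-up) data.
import random

random.seed(42)

# "Observation": phenomenon X measured with factor Y present vs. removed.
# In a real experiment these numbers would come from the bench, not a simulator.
with_Y = [random.gauss(10.0, 1.0) for _ in range(30)]    # X with Y present
without_Y = [random.gauss(8.5, 1.0) for _ in range(30)]  # X with Y removed

def mean(xs):
    return sum(xs) / len(xs)

observed_diff = mean(with_Y) - mean(without_Y)

# Null hypothesis: Y has nothing to do with X, so shuffling the
# "with Y" / "without Y" labels shouldn't change the difference we see.
pooled = with_Y + without_Y
n = len(with_Y)
trials = 10_000
more_extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:n]) - mean(pooled[n:])
    if abs(diff) >= abs(observed_diff):
        more_extreme += 1

p_value = more_extreme / trials
print(f"Observed difference: {observed_diff:.2f}, p ~ {p_value:.4f}")
# A small p-value means the observation is hard to explain under the null
# hypothesis: evidence (not proof) that Y really does act on X.
```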

Let’s apply this to politics today. Here’s a basic example: Trump lost the popular vote by nearly 2.9 million votes, but he keeps claiming that this is due to massive voter fraud where millions of people voted illegally. You could just take his claim at face value and not question it. After all, he is the president, so he’s privy to lots of information the general public doesn’t have, and as the leader of the U.S. he should be acting with integrity, right? You could use this logic to form the hypothesis that he is correct and there is evidence of massive voter fraud. Or you could apply a bit of skepticism and formulate an alternative hypothesis: Trump’s claim is false, and there’s no evidence of the massive voter fraud that he cites. How to test this? You can look for the evidence that he claims exists just by Googling… and you won’t actually find any. What you will find are instances of his surrogates claiming that the voter fraud is an established fact, various reputable news agencies and fact checkers debunking the claim (even right-leaning Fox News admits there is no evidence for the claim), and a complete dearth of any official reports that massive voter fraud occurred (which I would link to, but I can’t link to something that doesn’t exist). In support of the claim you will find right-wing conspiracy websites like InfoWars that don’t cite any actual evidence. So based on that inquiry, you could conclude that Trump’s claim is false. It didn’t take much skepticism or effort to address the question, just enough to ask “Can I easily find any solid evidence of this?”

One problem with this strategy: we aren’t doing a great job of teaching people how to think rationally and critically. On a large scale, we would do this by refocusing our education standards around critical thinking (part of what the Common Core and Next Generation Science Standards aim to do, albeit with mixed results in how they teach the scientific method and critical thought). There has been a lot of pushback to this, particularly in conservative areas. Why? One explanation could be a general distrust of government and antipathy towards regulations and standards demanded by the federal government. Another could be over-reliance on religion, which fundamentally demands faith in the absence of evidence. I’m not going to fully wade into this murky debate right now; I’ll just say that not all religion is bad, and some religions do teach their followers to think critically and analyze things. But other religious groups don’t teach these principles, and instead rely on encouraging their followers to just believe what they’re told by their pastor (or whatever religious leader), or face moral doom. This dangerously reinforces the idea that it’s okay to blindly follow certain authority figures without question. It extends past the church doors, to the school teacher slut-shaming high school girls instead of providing them with comprehensive sex education, to the public official who expresses skepticism about climate change despite an abundance of evidence that it’s happening and is caused by humans. People believe what these authority figures say. We’re seeing it now, as more than half of Republicans accept Trump’s claim that he really won the popular vote, with the percentage higher among Republicans with less education. One of the main reasons Trump supporters give for voting for him is his tough stance on immigration, and they seem to be happy with his controversial travel ban, even though the Department of Homeland Security recently found that people from the countries targeted by the ban pose no extraordinary threat compared to people from other Muslim-majority countries (Trump rejected this report, even though it was ordered by the White House). Rather than putting in a tiny bit of effort to look for evidence that these claims are true, or thinking about whether their news sources are biased, people blindly trust what Trump says. This underscores the importance of providing people with an opportunity to learn and practice critical thought.

Fixing the education system to provide more training in critical thought and use of the scientific method is absolutely necessary long-term, but right now we need a strategy to deal with people who don’t have that training. So if you have conservative friends or family and you’re brave enough to talk politics with them, ask them why they believe things to be true. If they cite something as evidence that isn’t rigorous, ask them why they trust that source. I think it’s possible to do this without talking down to them; most people are probably capable of applying logic in their thinking even if they haven’t been trained to do so. Instead of trying to argue that someone is wrong and back it up by saying “here’s a fact to support my argument and you should believe it because it’s true” (even if that’s accurate), ask them why they think your fact isn’t true, and respectfully lead them to your evidence to back up your argument. Perhaps it won’t work all the time, but if the seed of skepticism can be planted in at least some people, they may be more careful in their voting decisions in the future. If you rely on logic and rational skepticism to make your decisions, you have an obligation to help other people do the same. It’s worth a try.

 

Featured image: Barbara Lee’s town hall, February 18th, Oakland.

Open access and peer review, in a nutshell

Let’s talk about something scientific.

One of the key underpinnings of the scientific process is the ability to share research results with others. Before we share our results with the wider world, we share them with other scientists to get feedback on our work and suggestions for additional experiments that could provide more solid evidence for our claims*. This is a fundamental part of the publishing process, known as peer review, which is basically just scientists checking each other’s work. Who is more qualified to do this than other scientists in the field? If you’ve looked at an article published in a scientific journal recently, you might notice that 1) it’s very dense, and 2) the techniques used, and often the questions asked, are pretty complex. It would likely be hard for someone with no scientific training, or even a scientist from a different field, to provide useful critique or to spot things the authors may have overlooked. So when we submit our manuscripts to journals for review, we try to have them reviewed by other scientists in our sub-field who are most familiar with the techniques and questions discussed in the manuscript (and thus with the benefits and pitfalls of what we discuss).

If scientists didn’t rely on peer review, we’d be able to publish just about anything and claim it to be fact, and then it would be up to the general public to critique it and spread the word about whether or not the results are valid. That would just be inefficient, and highly unlikely to succeed. Peer review acts as both a filter and a stamp of approval**.

After a manuscript is peer-reviewed and published in a journal, that information is theoretically available to the public and part of the established scientific knowledge base. But scientific research isn’t truly available to the public unless it’s actually accessible. Many journals are behind what’s known as a paywall, where you have to subscribe in order to access the content beyond the abstract (a summary of what an article is about). This is similar to how the New York Times charges $2.75/week for digital access. The difference is that the subscription costs of many journals are exceedingly high, such that most individual people can’t afford a subscription, let alone multiple subscriptions to different journals. Scientists can usually access these articles because they work for a university or company that shoulders the cost of subscriptions to many journals, but depending on how well-funded the institution is, the cost may still be prohibitive.

Why is this a problem? There’s the obvious issue of forcing published science into a black box that remains mysterious to the general public, which helps to feed the perception that scientists’ work is beyond the reach of “normal people” and blocks public interest in all but the sexiest or weirdest stories. There’s also the fact that a majority of scientific research is paid for by the government, which uses taxpayer money to fund grants. So taxpayer funds are going to facilitate scientific research, but then most taxpayers can’t actually read about the research they paid for. And if the research isn’t even made available to all scientists, it hinders future scientific progress (what do scientists have to build on if they don’t know the current state of the field?). This is where the idea of open access comes in.

Some publications are open access, like the PLOS journals and eLife, and these publications do not require a reader to pay to view their articles. Other publications, like Nature and Science, charge a subscription fee. Nature’s fee is $3.90/issue. Perhaps that sounds on par with subscriptions to non-scientific magazines and newspapers, but keep in mind one fundamental difference: you can get the news from multiple sources, so if something important happens, several news agencies will report on the same story, and you don’t necessarily need to pay for it. With scientific publications, a research article will only be published in one journal, so to access all research as it comes out, you’d have to pay for subscriptions to many different journals. It adds up quickly, and effectively leads to people paying twice for scientific research (assuming they already pay taxes).
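
To make “it adds up quickly” concrete, here’s a quick back-of-the-envelope calculation. The $3.90/issue figure is the one quoted above; the issue count and the number of journals are assumptions I’m picking purely for illustration.

```python
# Back-of-the-envelope subscription math. Only the per-issue price comes from
# the post above; the other numbers are illustrative assumptions.
nature_per_issue = 3.90   # quoted above
issues_per_year = 51      # assumption: roughly weekly publication
one_journal = nature_per_issue * issues_per_year
print(f"One weekly journal: ~${one_journal:.0f}/year")        # ~$199/year

journals_followed = 10    # hypothetical number of journals in one sub-field
total = one_journal * journals_followed
print(f"{journals_followed} journals: ~${total:.0f}/year")    # ~$1989/year
```

And that’s on top of the taxes that already funded much of the research in the first place.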

Together, peer review and open access are fundamental to scientists’ ability to share our work with the public, demonstrate convincingly that our findings are accurate, and allow non-scientists to engage with scientific research. Attempts to limit this are wholly detrimental to the scientific process and to public understanding of science. Last month, the Trump administration ordered a media blackout on several government agencies, including the EPA, and also indicated that research from EPA scientists would need to be approved by the administration so as to “reflect the new administration”. The Trump administration is not run by scientists, and it’s unclear who in the administration would be reviewing scientific results. This amounts to unqualified, politically motivated people deciding, based on their agenda, what science gets published: clearly problematic and fundamentally counter to widely held standards of scientific integrity.

Regardless of who is in office, scientists should be working to improve peer review and the general public’s access to scientific research. On top of that, we should work to help non-scientists understand the process of doing research and the lengths we go to in order to demonstrate that, to the best of our knowledge, our findings are accurate. Without this line of communication, we will be forever holed up in our ivory towers, piddling away on experiments that will never make as great an impact as they should, because people either cannot hear us or cannot understand us.

~~

*Or to find out if our claims really are accurate; sometimes you do another experiment and it demonstrates that what you thought was an interesting phenomenon is actually just noise, or that it’s less significant than you thought.

**It’s not a perfect system, of course. Peer-reviewed results do get published that are eventually shown to be false upon further testing, or sometimes after it’s discovered that data was fudged. An ideal system would have scientists acting with integrity 100% of the time, but as in every other field, people are sometimes deceptive and do things that undermine the system when it looks like it will benefit them. Sometimes we publish results that we think are correct, but later advances in the field or attempts to repeat an experiment show that those results are not correct. This is an ongoing struggle, and peer review is one of the things that combats it.

Featured image: Berkeley neighborhood flower.

 

The first one.

Like many people, I’ve decided that this year I’d like to start writing more. I’m (hopefully) nearing the end of my PhD and I’ve been thinking about where I want to go next, and what I’m gravitating towards pretty much always involves communicating complex scientific topics to non-scientists. More generally, we’re heading into a time when people will need to be more aware of what’s going on in the world, to think critically about it, and to speak out about it. Our bullshit detectors should be turned up high, as Jon Stewart called for. So here I am, resolving to blog at least once a week this year. I’ll do what I can to contribute to the necessary discourse between scientists and the general public. I’ll look for cool new scientific developments to share, and keep an eye out for media misrepresentation (it happens a lot, and is truly cringeworthy). Most of all, I’ll be on the watch for attempts to stall progress, both scientific and otherwise, by those who don’t understand it and/or are being paid to keep it from happening.

If there’s any lesson we should glean from 2016, it’s that rational, empathetic people cannot sit by and expect progress to happen on its own. We need to take an active part in it. So many of us have the ability to do so— whether it’s due to experiences in our personal life, or our career background, we should use our experiences to provide a logical perspective on issues we can speak to. It’s easy to get caught up in our own hectic lives and use that as an excuse not to participate, but it’s clear now that we have a job to do.

I’m setting a reminder on my phone. Sunday nights, I’m telling myself to write. I’m going to try to discipline myself, make myself take some of that time spent on Facebook and Reddit and use it to be productive. We’ll see how it goes.

(Also, prepare for some unrelated featured images, because I’m still figuring this WordPress thing out, but the default “random raspberries in a mug” picture is not my jam. Here are some San Francisco houses, because in these uncertain times, it helps to occasionally stop and admire lovely things.)