Why do human beings keep getting diseases from bats?

Humans get a surprising number of very infectious diseases from bats. We get SARS (including the recent COVID-19, caused by SARS-CoV-2), Ebola, rabies, and possibly mumps. These are all incredibly infectious, deadly diseases.

This seems weird because human beings aren’t in particularly close contact with bats. They’re nocturnal, don’t have large city populations (for the most part), and humans don’t eat them that often. It should be harder for diseases to pass from them to us. They’re also not very similar to us genetically, so their diseases shouldn’t be able to leap to us so easily.

Part of the answer is that bats are very social creatures. When one bat gets a virus, they pretty quickly pass it on to the other bats in their colony. However, that’s also true of goats and cows, who don’t seem to pass on infectious diseases to us as often.

The more important part of the answer is that bats are “reservoirs” of some particularly virulent viruses. Bats live with long-term infections of SARS or Ebola and are seemingly ok with it. While humans and other mammals either have to clear these viruses from their body or die, bats do not. They will just keep on keeping on, sometimes shedding the virus, sometimes not. It’s more likely that the bat will shed the virus during stressful times (i.e. when it’s in a cage and about to get eaten).

That’s what seems to have happened with COVID-19. A bat shed the SARS-CoV-2 virus at some point, probably in a wildlife market. The virus at this point was not in a state where it could infect humans. However, viruses can both mutate (change shape) and recombine (swap parts) rapidly. Coronaviruses are especially good at recombining.

The SARS-CoV-2 virus was shed from a bat (possibly in its saliva or droppings), seems to have recombined with a coronavirus in a pangolin (which was probably in a cage right next to it), and then was in a form where it could be transmitted to a human. Once it was, by chance, in the right form, that virus could successfully spread itself to humans everywhere.

That’s the short version. There’s an interesting question, though: why don’t these viruses kill bats? Ebola, SARS, and rabies all kill their hosts pretty quickly. How can bats live with these viruses year after year?

Well, that’s complicated. This is going to require a dive into the immune system. Before I start, two brief caveats:

  1. When I discuss bats, I’m discussing 1300+ species across almost every continent. Not all bats are the same, and we haven’t really studied most bats. Generalizations are necessary, but just be aware that they’re happening and might not apply to specific bat species. I’m also generalizing across all bat cells, and what’s true for a cell in the abdomen is not going to necessarily be true for a cell in the testes or brain.
  2. When I discuss immunology, I’m discussing an incredibly complicated subject that we still don’t know a ton about. The immune system is the defense force for the entire body, which is a hard enough job in the first place. However, it’s also been in an arms race with bacteria, viruses, and parasites for billions of years, developing defenses, countermeasures, and counter-countermeasures. Discussing the immune system is like discussing season 4,500,000,000 of a TV show that started complicated to begin with.

With those caveats out of the way, let’s start exploring why the bat immune system is so different from ours. Both bats and humans are mammals, which means we have roughly similar immune systems and roughly similar responses to viruses.

When a virus comes into the body, its goal is to invade cells, take over their production capabilities, and use those capabilities to produce more viruses. Then the newly produced viruses do the same. The virus also wants to spread itself outside the body (e.g. via a cough). Viruses want to do all this ideally without being noticed by the immune system, and certainly without being disrupted by it.

The body’s goal is, basically, to stop all that. The body wants to stop viruses from coming in. If a virus does come in, the body wants to kill it before it invades any cells. If it does invade a cell, the body wants to know about it immediately. Then it wants to kill that cell and anything inside it. The cell’s role is to let the body know if it’s been invaded, let other cells around it know that it’s been invaded, and to contain the invasion as best as possible.

That is a really, really high level overview. There’s a lot of complexity hidden in there. But, it’s enough that we can dive into the specific difference between bats and humans: the cell’s role in the immune system.

In humans, the individual cell’s role in the immune system is a lot like those “if you see something, say something” posters. Human cells recognize viruses through pattern recognition receptors, which fit viral components like a lock and key. Once they recognize a virus, they start producing interferon.

Interferons interfere with viruses, hence their name. They work as both suppressant and alarm: making it harder to make proteins and RNA (the building materials of viruses and human cells), promoting the p53 gene (which starts the cell’s self-destruct sequence), alerting the body’s T and NK cells (which kill infected cells), and promoting high temperatures (which make it harder for viruses to replicate).

There are 3 types of interferon: alpha, beta, and gamma. They have a lot of overlap in functionality, but the most important difference is that it seems like alpha acts as the gas in the immune system and beta as the brakes.

That distinction is really important, because that ends up being the key difference between the bat immune system and humans’, as well as the key to why bats can carry these deadly infections.

Bat cells do not work on a “see something, say something” model. Instead, bat cells just continually “say something”. Instead of recognizing viruses and then producing interferon, they continually produce interferon alpha and seem to produce almost no interferon beta: all gas, no brakes.

In other words, bat cells just continually assume they’re under attack and never stop fighting viruses, regardless of whether they’ve detected any. This is surprising. Interferon is a really powerful molecule, and continually producing it should have the same effect on a cell as continually putting a factory on red alert. It should make the cell run much worse, and cause a lot of collateral damage.

After all, when this sort of immune system overreaction happens in humans, humans get serious disorders, like multiple sclerosis and lupus. Bats do not tend to get these. In fact, many bat species live around 20 years on average, which is not only far longer than you’d expect given their seemingly overactive immune systems, but is exceptionally long for such a small animal. To give a comparison, rats live a year or two, as do rabbits in the wild.

So, how do bats live so long with a hyperactive immune system? Well, the answer seems to be that although their interferon is continually produced, their immune system is never allowed to go to the same extremes as human immune systems.

There are a couple of ways in which they don’t go to extremes. For one, bats seem to lack Natural Killer (NK) cell receptors, which may mean they lack NK cells. NK cells are as heavy duty as their name implies; while their cousins, T cells, kill any cell that displays signs of being infected, NK cells kill any cell that fails to display signs of being uninfected. Viruses will frequently prevent cells from indicating that they’re infected, so NK cells just kill any cell that looks like it’s hiding something. Needless to say, this results in a lot of collateral damage.

For another, bat cells also lack a lot of the pathways to go into apoptosis (self-destruct mode). In a human cell, the production of interferon starts readying the cell to self-destruct and stop the virus from using the cell’s machinery. Bat cells lack an associated protein, and seem to have some significant changes at the related p53 gene.

So, bat cells are always ready to fight viruses, but never ready to go to the extremes of “kill or be killed” that human and other mammal cells are. This actually works out well for bats. A lot of the damage done in a viral infection comes from the overreaction of the immune system in “cytokine storms”, as in the 1918 flu pandemic. Bats avoid all of that.

So, bats just live with the infections instead. They fight them enough that the viruses can’t take over their body, but they don’t clear the infections. This balance can get upset, though, when the bat gets stressed. For instance, when bats get white nose fungus, a really deadly and stressful disease, they also end up with 60-fold higher levels of coronavirus in their intestines.

An added bonus is that the lower levels of inflammation in bats may contribute to their relatively long lifespans by slowing their biological aging. This is an interesting avenue of research for humans as well.

Last question, and here’s the most interesting one. Why are bats like this? What made their immune system so weird?

Well, it actually has to do with their flying. Bats are the only mammals that fly. Flying is a really energetic process and can raise bats’ internal body temperature up to 41 degrees Celsius (106 degrees Fahrenheit) for an extended period of time.

That’s really hot. In humans, that would cause serious brain damage. In bats, it’s enough to damage DNA through the production of reactive oxygen species, as well as to release the DNA into the cytoplasm or bloodstream.

This obviously meant that bats had to be really good at regularly repairing their DNA, a tricky process that can lead to cancer when it goes wrong. But it also meant that bats couldn’t rely on the classic immune system trick of recognizing loose pieces of DNA as foreign. In other animals, those loose pieces were likely strands of DNA from a virus or bacteria. In bats, they were likely just pieces of bat DNA that had been damaged and let loose in the wrong place.

Recognition couldn’t work in the same way. So bats’ immune systems decided to be always on, instead. Then, to avoid the problems with that, bats’ immune systems also evolved to never reach the same levels of inflammation as other mammals’. The end result was that bats were much more able to live with deadly viruses, neither ignoring nor overreacting to them.

Neat, huh?

How to study for the MCAT

This post shared by Trevor Klee, Tutor.

1. Your overall MCAT studying process

a) Start with a diagnostic test. What are your specific strengths and weaknesses? Use the error log app to discern the patterns.

-The error log is like flashcards, but more flexible and better for analytics.

b) If you’re missing content, review the Khan Academy videos for the required information. Employ active review: pause the video, write notes, and form mental connections between what was just covered, what’s been covered, and the overall topic. Do not just watch the videos all the way through like a TV show.

c) Do Khan Academy and AAMC questions to focus on what you’ve reviewed, as well as the content surrounding it. Really try to understand the process of how to solve questions: you’ll find a lot of examples online. Ask yourself why the right answers are right, and the wrong answers are wrong.

Don’t worry about speed; that comes with being confident and fluent in the techniques. As the old Army saying goes, “Slow is smooth and smooth is fast.” Focus on being smooth in your answering process.

d) Once you feel like you’ve covered your initial weaknesses, or you feel confused about what to do next, take a practice test. Then start with a) again.

e) There are two parts to studying for the MCAT.

One part is like being a marathon runner. You need to put the miles in on the pavement to run a marathon. Anyone can do it, but it takes effort. Doing questions, getting them wrong, and then learning how to do them correctly is the equivalent of putting those miles in. It’s going to suck, but that’s how you learn.

The second part is like being your own coach. You need to reflect on your own progress and what you get wrong and right. What are the patterns in what you get wrong? What techniques do you have difficulty applying?

2. Your MCAT materials

Mandatory

-AAMC Full Length tests

-AAMC Section bank questions

-Khan Academy videos

-Flashcards/an error log. The reason I call it an “error log” is that it shouldn’t just be for facts. Anything that you want to remember for test day (like practice problems, diagrams, or techniques) should also go in there.

Optional

-Other AAMC question packs (if you need additional review)

-UWorld question packs (Ditto)

3. Your MCAT study plan

Short MCAT study plan

-Plan for roughly 300 hours of serious studying to get a good score (90th percentile or above)

-So, plan on roughly 4 months of studying 20 hours a week (to give yourself some wiggle room)

-That’s 2 hours a day on weekdays, 5-8 hours a day on weekends (the longer stretches of time are for full length practice tests)

-It’s a lot! But packing it all into a few months is the best way to do it. People get discouraged when they spend a year or two  working on the MCAT, especially when it’s hard to see yourself making improvements week by week. Packing it into a short time prevents that.

Long MCAT study plan

-Here’s a link to a free, detailed 16 week MCAT study plan by Nick Morriss, 99th percentile MCAT tutor.

4. How to study the content tested on the MCAT

This is how you should approach the content for the first and subsequent times

a) Be engaged with the videos. Make sure you are taking notes that aren’t just transcripts of what the video said. Think about the material presented and write it down in your own words.

b) Between 2-6 days after learning/reviewing content for the first time, go back through the notes you’ve taken for a given topic or set of topics. I strongly recommend rewriting them or typing them up; this forces you to take longer to think about what the notes say, while also letting you feel like you’ve accomplished something at the end.

Added bonus: you now have a nicer, neater study guide to draw from if you need to quickly find something later.

c) Take note of content you are struggling with and revisit it 1 week later. You may need to rewatch some videos or look for other explanations if you can’t figure out why you aren’t understanding. Don’t stress if you feel like it should be easy; it’s a lot of complex information!

d)  Every 4 weeks or so, go back through this content and rewrite key points.

5. When to seek out MCAT tutoring

You might expect a tutor to say, “Seek out tutoring, all the time, for as many hours as possible, no matter what” (as my Dad says, “Don’t ask the barber when you should get a haircut”).

But, this isn’t the case. Or, at least, it’s not what I recommend.

You should seek out tutoring in two cases:

  1. You took a practice MCAT or a real MCAT, and it didn’t go the way you expected or wanted
  2. You’ve been studying for a while, and you’re overwhelmed

In either case, you shouldn’t seek out tutoring until you’ve put in some serious effort on your own. It’ll save your wallet, and give you a better idea of what you can get out of tutoring.

You can start your MCAT tutoring journey by emailing me at trevor@trevorkleetutor.com .

How to study for the GRE

This post shared by Trevor Klee, Tutor.

Your overall process when preparing for the GRE

a) Start with a diagnostic test. What are your specific strengths and weaknesses? Use 21st Night to discern the patterns.

If you put the questions you get wrong into the error log app, then head to the analytics section, you’ll get an idea of what exactly you need to focus on next.

b) Do questions to focus on your weaknesses as revealed through the error log app. Really try to understand the process of how to solve questions: you’ll find a lot of examples online. Ask yourself why certain techniques are used, and why your initial instinct may be wrong.

Don’t worry about speed; that comes with being confident and fluent in the techniques. As the old Army saying goes, “Slow is smooth and smooth is fast.” Focus on being smooth in your application of techniques.

c) Once you feel like you’ve covered your initial weaknesses, or you feel confused about what to do next, take another practice test. Then start with a) again.

d) There are two parts to studying for the GRE.

One part is like being a marathon runner. You need to put the miles in on the pavement to run a marathon. Anyone can do it, but it takes effort. Doing questions, getting them wrong, and then learning how to do them correctly is the equivalent of putting those miles in. It’s going to suck, but that’s how you learn.

The second part is like being your own coach. You need to reflect on your own progress and what you get wrong and right. What are the patterns in what you get wrong? What techniques do you have difficulty applying?

Your materials

Mandatory

-Official GREPrep tests

-21st Night as an error log

-The error log helps you organize yourself, and shows you which questions you still need to do, which questions you need to understand, and the patterns in what you’re getting wrong. It will also help you repeat questions so you can remember the strategies necessary on test day.

-The official GRE books (official guide, quant supplement, verbal supplement)

Optional materials

-Manhattan Prep 5 lb. GRE book, for extra quant questions (the official books don’t have enough)

-Strategy guides, for the necessary techniques

-My recommendations: my strategy guides

Your study plan

If you want a detailed 3 month study plan, you can receive ours.

Otherwise, plan for roughly 100 hours of hardcore studying to go up 10-15 points on quant or verbal.

So, if you’re starting at 150V/150Q and want to get to 165V/165Q, plan on roughly 4 months of studying 20 hours a week (to give yourself some wiggle room, if you have some unproductive days).

That’s 2 hours a day on weekdays, 5 hours a day on weekends.

It’s a lot! But packing it all into a few months is the best way to do it. People get discouraged when they spend months working on the GRE, especially when it’s hard to see yourself making improvements week by week. Packing it into a short time prevents that.

How to review the sections

This is both how you should approach the questions, and, more importantly, how to analyze a question you got incorrect.

Reviewing through the error log is the key to understanding. If you don’t review your incorrect questions, you’ll never understand them.

Vocabulary: how can we break down the sentence to tell us what goes in the blank, especially key sign posts (like but, likewise, etc.)? Is what we missed simply not knowing the word, or was our comprehension off?

Reading Comprehension: what precise part of the passage did I need to read to get the correct answer?

Critical Reasoning:  how does the argument work (premise, reasoning, conclusion)? how does the correct answer fit into the argument?

Quant: what equations do I need to start with? how do I get from there to the answers?

Data interpretation: where’s the trick in the graph?

When to look for a tutor

You might expect a tutor to say, “Seek out tutoring, all the time, for as many hours as possible, no matter what”. As my Dad says, “Don’t ask the barber when you should get a haircut”.

But, this isn’t the case. Or, at least, it’s not what I recommend.

You should seek out tutoring in two cases:

  1. You took a practice GRE or a real GRE, and it didn’t go the way you expected or wanted
  2. You’ve been studying for a while, and you’re overwhelmed

In either case, you shouldn’t seek out tutoring until you’ve put in some serious effort on your own. It’ll save your wallet, and give you a better idea of what you can get out of tutoring.

In that case, you can start your GRE tutoring journey by emailing me at trevor@trevorkleetutor.com .

How to study for the LSAT

This post shared by Trevor Klee, Tutor, a Boston-based and online LSAT tutor who scored 175 on his LSAT.

1. How to Prepare for the LSAT

a) Start with a diagnostic test. What are your specific strengths and weaknesses? Use 21st Night to discern the patterns, and follow the error log’s patterns for review (don’t skip them)!

b) Do questions to focus on your weaknesses as revealed in the diagnostic test. Really try to understand the process of how to solve questions: you’ll find a lot of examples online. Ask yourself why certain techniques are used, and why your initial instinct may be wrong.

Don’t worry about speed; that comes with being confident and fluent in the techniques. As the old Army saying goes, “Slow is smooth and smooth is fast.” Focus on being smooth in your application of techniques.

c) Once you feel like you’ve covered your initial weaknesses, or you feel confused about what to do next, take another practice test. Then start with a) again.

d) There are two parts to studying for the LSAT.

One part is like being a marathon runner. You need to put the miles in on the pavement to run a marathon. Anyone can do it, but it takes effort. Doing questions, getting them wrong, and then learning how to do them correctly is the equivalent of putting those miles in. It’s going to suck, but that’s how you learn.

The second part is like being your own coach. You need to reflect on your own progress and what you get wrong and right. What are the patterns in what you get wrong? What techniques do you have difficulty applying?

2. Your LSAT preparation materials

Mandatory

-Khan Academy LSAT Prep

-21st Night

-The error log helps you organize yourself, and shows you which questions you still need to do, which questions you need to understand, and the patterns in what you’re getting wrong. It will also help you repeat questions, so you remember the strategies on test day.

-LSATHacks.com for answer explanations (thanks Graeme!)

Optional

-My recommendations: my videos

3. Your LSAT study plan

-Plan for roughly 100-150 hours of hardcore studying to go up 20 points

-So, if you’re starting at 150 and want to get to 170, plan to spend 3-4 months spending 20 hours a week studying (to give yourself some wiggle room)

-That’s 2 hours a day on weekdays, 5 hours a day on weekends

-It’s a lot! But packing it all into a few months is the best way to do it. People get discouraged when they spend months working on the LSAT, especially when it’s hard to see yourself making improvements week by week. Packing it into a short time prevents that.

If you get to the point where you can not just do, but also explain every question in Khan Academy (why the right answer is right and why the wrong answers are wrong), you can get a 165+.

4. How to review the LSAT sections

-This is both how you should approach the questions, and, more importantly, how to analyze a question you got incorrect

-Analyzing incorrect questions is more important than doing new ones. Use these questions!

Reading Comprehension:

How did the passage fit together (i.e. why did the author include each paragraph in the section)? What precise part of the passage did I need to read to get the correct answer?

Logical Reasoning:

How does the argument’s reasoning lead to its conclusion (or, if it doesn’t, why not)? How does the correct answer fit into the argument’s flow from reasoning to conclusion?

Logic Games:

What’s the model of how the game works (i.e. what would be one correct answer to the game)? How can you minimize the logical steps you need to take to get or eliminate an answer (think of it like golf, and get a low score)?

5. When to get LSAT tutoring

You should get tutoring when

  1. You took an LSAT and it didn’t go well
  2. You feel overwhelmed

You don’t have to start with tutoring!

But, if you do want an LSAT tutor, contact me at trevor [at] trevorkleetutor.com .

How to Study for the GMAT

This post shared by Trevor Klee, Tutor.

1. Your overall process to start preparing for the GMAT

a) Start with a diagnostic test. What are your specific strengths and weaknesses?

Use 21st Night to discern the patterns. Put all questions you got wrong in the error log, and see which sorts of questions you tend to struggle with. The analytics section of the app will help.

b) Do questions to focus on your weaknesses. Really try to understand the process of how to solve questions: you’ll find a lot of examples online. Ask yourself why certain techniques are used, and why your initial instinct may be wrong.

Don’t worry about speed; that comes with being confident and fluent in the techniques. As the old Army saying goes, “Slow is smooth and smooth is fast.” Focus on being smooth in your application of techniques.

c) Once you feel like you’ve covered your initial weaknesses, or you feel confused about what to do next, take another practice test. Then start with a) again.

d) There are two parts to studying for the GMAT.

One part is like being a marathon runner. You need to put the miles in on the pavement to run a marathon. Anyone can do it, but it takes effort. Doing questions, getting them wrong, and then learning how to do them correctly through the error log is the equivalent of putting those miles in. It’s going to suck, but that’s how you learn.

The second part is like being your own coach. You need to reflect on your own progress and what you get wrong and right. What are the patterns in what you get wrong? What techniques do you have difficulty applying?

2. Your materials to prepare for the GMAT

Mandatory

-Official GMATPrep tests

-Official Guide questions (which are all available on GMATClub.com)

-21st Night

Optional

-Strategy guides, for the necessary techniques

-My recommendations: my strategy guides, Manhattan’s

3. Your GMAT study plan

-Generally speaking, you need to work 100 hours (intelligently) to improve 100 points on the GMAT. This is, of course, a very rough estimate, and depends heavily on the quality of the hours you put into studying.

-A reasonable way to accomplish this is to plan to work for 20 hours a week for 2 months (giving yourself room for breaks and slow days). That means working 2 hours per day on the weekdays, and 5 hours per day on the weekends.

-Studying for the GMAT should be a sprint. If you plan to spend 6 months, you will get demotivated midway through and lose track of what you’ve learned. Make it a major part of your life for 2–3 months, then be done with it.

-For a specific 60 day study plan, you can check out my email course.

4. How to review the GMAT sections

-This is both how you should approach the questions, and, more importantly, how to analyze a question you got incorrect

-Revision through the error log is the key to learning

Reading Comprehension: what precisely did I need to read to get the correct answer?

Critical Reasoning: how does the argument work (premise, reasoning, conclusion)? How does the correct answer fit into the argument?

Sentence Correction: how does the correct answer correctly and efficiently convey the meaning?

Problem Solving: what equations do I need to start with? how do I get from there to the answers?

Data Sufficiency: How do I simplify the prompt? Or, in other words, what’s the prompt really asking for?

When to seek out GMAT tutoring

You might expect a tutor to say, “Seek out tutoring, all the time, for as many hours as possible, no matter what”. As my Dad says, “Don’t ask the barber when you should get a haircut”.

But, this isn’t the case. Or, at least, it’s not what I recommend.

You should seek out tutoring in two cases:

1. You took a practice GMAT or a real GMAT, and it didn’t go the way you expected or wanted

2. You’ve been studying for a while, and you’re overwhelmed

In either case, you shouldn’t seek out tutoring until you’ve put in some serious effort on your own. It’ll save your wallet, and give you a better idea of what you can get out of tutoring.

In that case, you can start your GMAT tutoring journey by emailing me at the address on the top of the page.

Genetically engineering virus-immune bees

This is a post I wrote outside my comfort zone, mainly because I was having serious writer’s block about writing things inside my comfort zone. I think everything I wrote is correct, but who am I to judge?

Because there’s literally nothing else interesting or important going on in the world right now, I thought I’d take a close look at this neat paper on bees called “Engineered symbionts activate honey bee immunity and limit pathogens”, along with its supplement.

In this paper, they detail how they genetically engineered the gut bacteria of bees to produce double stranded RNA, which they used to make bees gain weight, defend themselves against deformed wing virus, and kill parasitic Varroa mites. The latter two are the main causes of colony collapse disorder, if you’re familiar with it.

This is cool already, but the way they did it was cool, too: they figured out what parts of the bee/virus/mite genomes they needed to target, bought custom-made plasmids (small DNA sequences) online to produce the RNA that’d target the necessary parts of those genomes, put the plasmids into bee gut bacteria, and then put the gut bacteria into the bees.

That’s really cool, right? We now live in an era where you can just like… genetically engineer bees with stuff you order online. Realistically, you could genetically engineer yourself with stuff you order online. You could make yourself resistant to a virus or lactose intolerant (if you really wanted to).

Before we wax too rhapsodic, though, let’s talk a bit about exactly what they did, how they did it, what the limitations are, some unanswered questions/issues, and then, finally, how soon this stuff could actually be ready for real-world use.

So, to explain what they did, let’s start with their goals. Their goal was to cause RNA interference with other RNA strands in the bees’ bodies. RNA, as you might recall from biology class, is a lot like DNA, in that it contains instructions for other parts of the cell. Bees’ bodies (and our bodies) use RNA to help transmit instructions from DNA to cell machinery, while viruses just keep all their instructions in RNA in the first place.

RNA interference, therefore, means that the instructions are being disrupted. If you disrupt the instructions to reproduce a virus, the virus will not be reproduced. If you disrupt instructions to produce insulin, the insulin will not be produced. One of the ways RNA can be interfered with (and the way that these people specifically interfered with RNA) is by double stranded RNA. 

Double stranded RNA (dsRNA) is what it sounds like: RNA, but double stranded. This is weird because RNA is usually single stranded. When you put targeted double stranded RNA into the body, an enzyme called dicer dices it (great name, right?) into short fragments, which get separated into single strands.

One of these strands will then be complementary (fits like a puzzle piece) to the target RNA, so it’ll latch onto it, serving as a sort of flag to the immune system. Then a protein called argonaute, now that it knows what it’s targeting, comes in and slices the target RNA in two. The target RNA is effectively interfered with.
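
If “complementary” feels abstract, here’s a tiny sketch of what it means computationally. The sequences below are made up by me for illustration (the paper’s actual target sequences aren’t reproduced in this post); the only real content is the A–U / G–C pairing rule.

```python
# Toy illustration of RNA complementarity. Sequences are hypothetical,
# not from the paper; only the A-U / G-C pairing rule is real.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence."""
    return "".join(PAIR[base] for base in reversed(rna))

def guide_matches(guide: str, target: str) -> bool:
    """True if the guide strand is complementary to some stretch of the target."""
    return reverse_complement(guide) in target

guide = "AUGGCUACGU"                    # hypothetical guide strand
target = "GGGACGUAGCCAUCCC"             # hypothetical stretch of viral RNA

print(reverse_complement(guide))        # ACGUAGCCAU
print(guide_matches(guide, target))     # True: the "puzzle piece" fits
```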

Now, this is something that happens naturally in the body all the time as part of the immune system. However, the body has to be producing the right double stranded RNA already, so it can flag things correctly (the flags are super specific). What if the body isn’t producing the right dsRNA yet?

Well, if it isn’t, you need to get the dsRNA in there somehow, so the flagging process can start. One option is, of course, just to inject a ton of double stranded RNA into the body. You have to make it all first, of course, and it has a limited shelf-life, but it’s doable. That’s been done before with bees.

This paper took a different tack. The authors wanted double stranded RNA to be produced inside the bees’ bodies. Bees’ bodies (and all bodies) contain all the machinery to produce any type of RNA you want. That’s how viruses work, actually: they force the body to produce the viral RNA. It’s all just building blocks put together in different orders, after all.

So, in order to get it produced inside the bees’ bodies, they first designed a plasmid, which is a circular ring of DNA (DNA can produce RNA). This was the thing that they literally just went to the Internet for. They knew the result they wanted to get (the sequence of the dsRNA), so they just went online and ordered a plasmid that would produce dsRNA in that order, and they got their plasmid in the mail. That’s amazing, right?

Once they had a plasmid, they “transformed” it into S. alvi, a gut bacterium that’s very common in bees. This is basically like molecular sewing: you snip open S. alvi‘s DNA, snip open the plasmid to get a single strand, sew the single strand into S. alvi‘s DNA, and then let S. alvi sew itself back together again with the plasmid still inside.

Then, getting it into the bees was relatively easy: they dunked the bees in a solution with sugar water and the bacteria. The bees clean each other off, and then they get infected with the bacteria. Now, the next time S. alvi‘s DNA gets activated in the bee to do normal gut bacteria stuff, it’ll also produce this dsRNA.

From there, pretty much everything else was just testing. They tested where the RNA ended up being produced by including green fluorescent protein (GFP) in their plasmid, which is a super common (but still cool) tactic in biology. If you include “make this protein that glows green under UV light” in your plasmid’s instructions, then wherever your RNA is being produced, there will also be bright green light.

They also tested whether dicer and argonaute were active, to see if the dsRNA was actually doing its thing. Finally, they tested whether the approach actually worked. First, they used one kind of dsRNA to interfere with insulin RNA (i.e. disrupt the production of insulin). They found that insulin production halved (or even quartered) in all areas of the bee body compared to control.

As you’d expect, this has pretty dramatic effects on the bees. The bees who had insulin interfered with were more interested in sugar water, and also gained weight compared to normal bees. I’ve put the weight graph down below, as I think it’s convincing. The sugar water graph I’m also going to put down below, but I’ll discuss it later, because it’s kind of weird.

pDS-GFP is the plasmid that only produces green light. pDS-InR1 is the plasmid that knocks out insulin. As you can see, the bees that were infected with pDS-InR1 started off lighter, on average, than the bees with pDS-GFP, then ended up heavier.


Same deal as above.
So, this is a complicated graph. Essentially, they don’t feed the bees for an hour, then they strap them down and put them next to sugar water. If the bee extends its proboscis, that’s a response. A response rate of 0.25 means 25% of the bees in a treatment group responded. All bees that responded just to water or never responded were kicked out. pDS-InR1 are the bees that had their insulin knocked out; pDS-GFP are the bees that only produce green light; pNR are bees who were infected with empty plasmids. The 0.01 is the p-value for the difference between the empty plasmid and the insulin one.

Then, they tried for the interesting stuff. They used another kind of dsRNA to interfere with the reproduction of deformed wing virus (DWV) in bees. The combination of deformed wing virus and Varroa mites is super deadly for bees, and they are the main direct causes of colony collapse disorder. So, first, they infected honey bees with DWV, then gave them a plasmid to produce dsRNA to interfere with its reproduction. 45% of infected bees survived 10 days later with the plasmid; only 25% of bees survived without it.

Dashed line is when the bees were injected with just buffer solution, solid line is when they were injected with the virus. pDS-DWV2 are the bees that have the plasmid that protects against the virus; pDS-GFP are the bees that have the plasmid that just produces green light; pNR are bees with an empty plasmid. *** marks a highly significant difference (p < 0.001 by the usual convention); NS means not significant.

Next, they tried to use dsRNA to kill mites feeding on bees. This is a little more complicated, because the dsRNA gets produced in the bees, and then the mites feed on the bees and ingest the RNA. That then kills the mites. 50% of mites survived after 10 days when they didn’t feed on bees with the plasmid; only 25% of mites survived after 10 days when they fed on bees with the plasmid.

This is a confusing graph, because they infected the bees with the plasmid, then measured the survival rate of the mites that fed on those bees. pDS-VAR is the plasmid that kills mites. pDS-GFP is the green light, and pNR is just the empty plasmid. ** marks a significant difference (p < 0.01 by the usual convention).
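
As an aside, the survival percentages above are the kind of thing you can sanity-check yourself. Here’s a rough sketch with made-up group sizes (the post doesn’t reproduce the paper’s actual sample sizes), just to show how “25% vs 50% of mites surviving” gets turned into a significance call:

```python
# Rough significance sanity check. The counts below are HYPOTHETICAL:
# I'm assuming 40 mites per group for illustration; the real sample sizes
# are in the paper, not this post.
from scipy.stats import fisher_exact

survived_on_treated, died_on_treated = 10, 30   # 25% of 40 mites survive
survived_on_control, died_on_control = 20, 20   # 50% of 40 mites survive

table = [[survived_on_treated, died_on_treated],
         [survived_on_control, died_on_control]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# With 40 mites per group, this difference clears p < 0.05; with much
# smaller groups, the same percentages might not.
```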

Overall, this is a pretty cool paper with some cool methods. Unless they really, really screwed up their data, it seems like they definitely found an effective way to protect bees against viruses and mites through genetic engineering. A lot more bees survived and a lot more mites died than the control.

But I want to talk about some of the limitations and problems I have with the study, too.

First of all, one of their claims is that, even though they produced dsRNA only in the gut, it was effective throughout the bees’ entire body. They used this graph to show this.

The orange is the plasmid producing dsRNA with the green fluorescent protein (the GFP marker), and the gray is a plasmid with nothing in it as a control. The y axis is the number of copies of GFP RNA per ng of normal RNA, x axis is days after inoculation. The y axis is a log scale, so each tick is 10 times more than the last one.

You see that, by day 15, there’s like 10,000 copies of GFP RNA in the gut, 100 in the abdomen, and like 10 in the head for the treatment group. There’s also around 10 copies of GFP RNA in the gut and abdomen for the control group, which is presumably some kind of cross contamination or measurement error.

That’s fine, but I’m not a huge fan of 10 copies of GFP being an error in the control group, but a positive signal in the head of the treatment group. They try to defend it by pointing out that the head control group has zero GFP, but I’m not sure that’s the right comparison to make.

I think that the rest of the biomarkers in the head bear that out. Almost all of them are NS (not significant), and I’d imagine the one that is significant is just chance.

So, I’m not convinced any of this stuff makes it to the bees’ head. I think it definitely makes it to the bees’ abdomens, but the effect there is still weird. Look at the biomarker graphs below:

I’ve drawn in a bunch of lines just to point out how confusing the biomarker patterns are. For example, the graphs in row B are for dicer, which should go up with dsRNA (the body produces more of it because it has more dsRNA to dice). Note the y axis is the change, rather than an absolute number (annoying choice on their part).

For both the gut and abdomen, it increases from day 5 to day 10 (column a to column b). However, it stays flat in the abdomen in day 15, but still goes up in the gut. This is supposedly one of their significant results, but what gives?  Is there some natural limit in the abdomen that doesn’t exist in the gut?

They buried all these biomarker graphs in the supplement, but I think they’re interesting. At the very least, they complicate the story.

The next issue I have is with the sucrose response graph, which I’ve reproduced below.

If we just had the pDS-InR1 and pNR, I think it’d be a relatively clear story of insulin getting knocked out and bees becoming more sensitive to sugar. But, it’s really confusing what’s going on with the pDS-GFP bees.

It looks like those bees are consistently more responsive to sugar than the pNR bees, even though they should be virtually identical. Why is that? I really wish we could have seen the weight of the pNR bees compared to the other two, so we could have another point of comparison.

The final issue I have is with the bee and mite mortality rate graphs. Below is the bees with virus graph again.

The bees with the protective dsRNA (pDS-DWV) definitely do better than the bees without (i.e. the bees with just GFP or just the empty plasmid NR) in the treatment group.

But the pDS-DWV bees also definitely do better in the control group, which shouldn’t happen (they’re not being attacked by the virus). The graph says the gap is not significant, and it might be right, but it’s still a big gap. It’s almost the same size as the treatment gap.

I’m also wondering why up to 40% of bees are dying in 10 days in the control group (the orange dashed line). Bees, according to Google, live 122 to 152 days, so they shouldn’t be dying that quickly. I mean, obviously it’s traumatic to get stabbed in the chest with a comparatively giant needle and pumped full of fluid, but, if that’s the case, what effect is that having on the treatment group? How much of the death is from the virus vs. the trauma? Couldn’t they find a better way of infecting them?

I also wonder about the Varroa mite graph, which I’ve reproduced again below. In the graph, 50% of mites die after 10 days in the control group. According to Google, mites live for 2 months. Why are so many of them dying after 10 days?

I’d like to see a control group of mites feeding on bees in “the wild” (i.e. in a normal beehive), to see what the normal survival rate should be.

So, I think these are some strong results, but they’re complicated. 

I don’t think the story of how dsRNA moves around the body is super clear, and I think it probably doesn’t make it to the head at all, contrary to what the paper claims. 

I think that either insulin is more complicated in bees than this paper assumes (i.e. it doesn’t have such a clear relationship to propensity towards sugar water), or there was something wrong with the GFP bees.

Finally, I think this method does protect against mites and deformed wing virus, but how much it does is complicated by the fact that the bees and mites died a lot regardless of what was done.

Final question: how close is this to production?

Well, barring legal issues, I think this could actually be close. Here are the two big issues standing in the way:

First, it seems like each bee has to be individually treated. The study authors actually tried to see if bees could infect each other with the genetically engineered bacteria, but it only worked on 4/12 newly emerged workers (which is obviously not a large sample size). They’d need to figure out some way to encourage bees to infect each other more, or beekeepers would have to individually treat each bee (which is labor intensive and probably traumatizing for the bees).

The other issue would be with regard to creating the plasmids, putting them into bacteria, and then infecting the bees with the bacteria. That’s all expensive and labor intensive, and definitely not the sort of thing that beekeepers would want to do themselves. I’m actually very curious about the total cost of the plasmids for this experiment, and how much that would increase given the number of plasmids you’d need for all the bacteria.

Of course, the ultimate dream would be to do this to humans. Given that there are some viruses (which shall not be named) which currently don’t have effective treatments, the possibility of simply injecting ourselves with gut bacteria of our own transformed to produce dsRNA is really attractive.

I think there are some serious issues with that though. Humans have a really complex immune system compared to bees, and I’m not sure how our immune system would react to a bunch of random snippets of RNA floating around our blood stream. As a last ditch effort in a severe case though… might be interesting. I’ll explore that with my next post.

Self-organized criticality: the potential and problems of a theory of everything

Note: this essay is outside of my comfort zone, so there might be a few mistakes. I relied a lot on this paper and Wikipedia to help me think about it. Mistakes are my own.

The 1987 paper “Self-organized criticality: An explanation of the 1/f noise”, by Bak, Tang, and Wiesenfeld has 8612 citations. That is an astonishingly high number for a paper that presents a model for statistical mechanics. Even more astonishing is the range of papers that cite it. Just in 2020, it’s been cited by a paper on brain activity as related to genes, a paper on the “serrated flow dynamic” of metallic glass, and a paper on producing maps of Australian gold veins.

It is an incredibly influential paper on a huge variety of subjects. I mean, I doubt the scientists who wrote those papers have a single other citation in common in their whole research history. How did they all end up citing this one paper? What’s been the effect on science of having this singular paper reach across such a wide range of subjects?


These are the topics that I want to explore in this essay. Before I can, though, we have to start by explaining what the paper is and what it tries to be.

Self-organized criticality, or SOC, is a concept coming out of complexity science. Complexity science is generally the study of complex systems, which covers a really broad range of fields. You might remember it as the stuff that Jeff Goldblum was muttering about in Jurassic Park. When you get a system with a lot of interacting parts, you can get some very surprising behavior coming out depending on the inputs.

Bak, Tang, and Wiesenfeld, or BTW, were physicists. They knew that there were some interesting properties of complex systems, namely that they often displayed some similar signals. 

For one, if you measure the activity of complex systems over time, you often see a 1/f signal, or “pink noise”. For instance, the “flicker noise” of electronics is pink noise, as is the pattern of the tides and the rhythms of heart beats (when you graph them in terms of frequency).

From https://www.edn.com/1-f-noise-the-flickering-candle/ . Notice how the baseline of 1/f wanders? The basic reason is that it’s from a complex system with complex inputs.
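
If you want to see what 1/f noise actually looks like, an easy way to fake some (this is just a standard trick, nothing from the BTW paper) is to take white noise and reshape its spectrum so that power falls off as 1/f:

```python
# Generate approximate 1/f ("pink") noise by spectrally shaping white noise.
# A standard trick for illustration only, not anything from the BTW paper.
import numpy as np

rng = np.random.default_rng(0)
n = 2**14

white = rng.standard_normal(n)
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n)
freqs[0] = freqs[1]                       # avoid dividing by zero at DC

pink = np.fft.irfft(spectrum / np.sqrt(freqs), n)   # power ~ 1/f, so amplitude ~ 1/sqrt(f)

# Plot `pink` against time and you'll see the wandering baseline from the
# figure above; plot its power spectrum on log-log axes and you'll see a
# roughly straight line of slope -1.
```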

For another, if you measure the structure of complex systems over space, you often see fractals. They’re present in both Romanesco broccoli and snowflakes. 

This is a gorgeous image of Romanesco broccoli from Wikipedia. It naturally approximates a fractal.

BTW proposed that these two are intimately related to each other, which had been suggested by others before. What was new was the way they proposed the relation: both can come from the same source [1]. In other words, 1/f noise and fractals can be caused by the same thing: criticality.

Criticality distinctions

Criticality is a phenomenon that occurs in phase transitions, where a system rapidly changes to have completely different properties. The best studied example is with water. Normally, if you heat water, it’ll go from solid, to liquid, to gas. If you pressurize water, it’ll go from liquid to solid (this is really hard and requires a lot of pressure).

However, if you both heat and pressurize water to 647 K and 22 MPa (or 700 degrees Fahrenheit and 218 times atmospheric pressure), it reaches a critical point. Water gets weird. At the critical point (and in the vicinity of it), water is compressible, expandable, and doesn’t like dissolving electrolytes (like salt). If you keep heating water past that, it becomes supercritical, which is a whole different thing.

A nicely labeled phase transition diagram from Wikipedia. Note that criticality is within the very close vicinity of that red dot.

So, there are two important things about criticality. First, the system really rapidly changes characteristics within a very close vicinity to the parameters. Water becomes something totally unlike what it was before (water/gas) or after (supercritical fluid). Second, the parameters have to be very finely tuned in order to see this. If the temperature or pressure is off by a bit, the criticality disappears.

So what does that have to do with 1/f noise and fractals? The answer is that systems are scale invariant at their critical point. That means that, if you graph the critical parameters (i.e. the things that allow the system to correlate and form a coherent system, like electromagnetic forces for water), you should always see a similar graph, no matter what scale you’re using (nano, micro, giga, etc.). This is different from systems at their non-critical point, which usually are a mess of interactions that change depending on where you zoom in, like the electromagnetic attractions shifting among hydrogen and oxygen molecules in water.

A mesmerizing display of scale invariance from Stack Exchange. No matter how much we zoom in, the graph remains the same.

This is suggestive of fractals and 1/f noise, which are also scale invariant. So, maybe there can be a connection that can be explored. 
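
To be slightly more concrete about what “scale invariant” means (this is the standard textbook statement, not anything specific to BTW): a power law is exactly the kind of curve that keeps its shape when you rescale the axis.

```latex
f(x) = C\,x^{-\alpha}
\quad\Longrightarrow\quad
f(\lambda x) = C\,(\lambda x)^{-\alpha} = \lambda^{-\alpha}\,f(x)
```

Zooming in or out (changing λ) only changes the overall prefactor, never the shape, which is why power-law statistics, fractals, and 1/f-type spectra all get read as signatures of scale invariance.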

Before BTW could make that connection stronger, though, they needed to fix the second important thing about criticality: the finely tuned parameters. 1/f noise and fractals are everywhere in nature, so they can’t come from something that’s finely tuned. To go back to water, you’re never going to see water just held at 647 K, 22 MPa for an extended period of time outside of a lab. 

This is where BTW made their big step. What if, they asked, systems didn’t have to be tuned to the parameters? What if they tuned themselves? Or, to use their terminology, what if they self-organized?

Now, for our water example, this is clearly out of the question. Water isn’t going to heat itself. However, not every phase transition has to be solid-liquid-gas. It just needs to involve separate phases that are organized and can transition (roughly). Wikipedia lists 17 examples of phase transitions. All of these have critical points. Some of them have more than one. 

So BTW just needed to find phase transitions that could be self-organized. And they kind of, sort of, did. They created a model of a phase transition that could self-organize (ish) to criticality. This was the sandpile model.

It goes like this: imagine dropping grains of sand on a chessboard on a table. When the grains of sand get too high, they topple over, spilling over into the other squares. If they topple over on the edge, they fall off the edge of the table. If we do this enough, we end up with an uneven, unstable pile of sand on our chessboard.

If we then start dropping grains randomly, we’ll see something interesting: a range of avalanches. Most of the time, one sand grain will cause a small avalanche, as it only destabilizes a few grains. Sometimes, that small avalanche causes a massive destabilization, and we get a huge avalanche.

What BTW consider important about this model is that the sandpile is in a specific phase, namely heaped on a chessboard. This phase is highly sensitive to perturbations, as a single grain of sand in a specific spot can cause a massive shift in the configuration of the piles.

If you graph the likelihood of a massive avalanche vs. a tiny one, you get a power law correlation, which is scale invariant. And, most importantly, once you set up your initial conditions, you don’t need to tune anything to get your avalanches. The sandpiles self-organize to become critical.

This graph from the original paper is on a log-log scale, so a straight line means that there’s a power-law correlation. The fall-off at the end is dictated by the size of the chessboard.
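
If you want to see this for yourself, the sandpile model takes about twenty lines of code. Here’s a minimal sketch of a BTW-style sandpile; the grid size and number of grains dropped are arbitrary choices on my part, while the topple-at-four rule is the standard one for a square grid.

```python
# Minimal BTW-style sandpile sketch. Drop grains at random sites; any site
# holding 4 or more grains topples, sending one grain to each neighbor
# (grains that fall off the edge of the board are lost). The "avalanche
# size" is how many topplings a single dropped grain sets off.
import random
from collections import Counter

SIZE = 40                                  # the "chessboard" is SIZE x SIZE
grid = [[0] * SIZE for _ in range(SIZE)]

def drop_grain() -> int:
    """Drop one grain at a random site and return the avalanche size."""
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[x][y] += 1
    topplings = 0
    unstable = [(x, y)] if grid[x][y] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        topplings += 1
        if grid[i][j] >= 4:                # still unstable after toppling once
            unstable.append((i, j))
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < SIZE and 0 <= nj < SIZE:   # off-board grains vanish
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return topplings

avalanche_sizes = Counter(drop_grain() for _ in range(100_000))
# Histogram the avalanche sizes on log-log axes and, once the pile has built
# up, you get roughly the straight (power-law) line from the figure above,
# with the fall-off at the end set by the size of the board.
```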

So, with this admittedly artificial system that ignores physical constraints (i.e. friction), we get something that looks like criticality that can organize itself. From there, we can try to get to fractals and 1/f noise, but still in artificial systems. Neat! So how does that translate to 8600 citations across an incredibly broad range of subjects?

Getting to an avalanche of citations for SOC


Well, because BTW (especially Bak, the B) weren’t just going to let that sit where it was. They started pushing SOC hard as a potential explanation anytime you saw both fractals and 1/f noise together, or even a suggestion of one or both of them.

As long as you had a slow buildup (like grains of sands dropping onto a pile), rapid relaxation (like the sand avalanche), power laws (big avalanches and small ones), and long range correlation (the possibility of a grain of sand in one pile causing an avalanche in a pile far away), they thought SOC was a viable explanation.

One of the earliest successful evangelizing attempts was an attempt to explain earthquakes. It’s been known since Richter that earthquakes follow a power law distribution (small earthquakes are 10-ish times more likely to happen than 10-ish times larger earthquakes). In 1987, around the same time as SOC, it became known that the spatial distributions and the fault systems of earthquakes are fractal.

Gutenberg-Richter law plotted with actual earthquakes, number of earthquakes vs magnitude. The magnitude on the bottom is log scale (the Richter scale), so we get a clear power law correlation. From Research Gate.
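
For reference, the straight line in that plot is the Gutenberg-Richter relation, a standard seismology result that predates SOC:

```latex
\log_{10} N(M) = a - b\,M, \qquad b \approx 1
```

Here N(M) is the number of earthquakes of at least magnitude M. With b near 1, each extra unit of magnitude makes an earthquake roughly ten times rarer; and since magnitude is itself a logarithm of the shaking amplitude, this is a power law in the underlying quantity, which is why it comes out as a straight line on those axes.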

From there, it wasn’t so far to say that the slow buildup of tension in the earth’s crust, the rapid relaxation of an earthquake, and the long range correlation of seismic waves meant SOC. So Bak created a model that produced 1/f noise in the time gap between large earthquakes, and that was that! (Note: if this seems a little questionable to you, especially the 1/f noise bit, read on for the parts about problems with SOC).

Next: the floodgates were open. Anything with a power law distribution was open for a swing. Price fluctuations in the stock market follow a power law, and they have slow-ish build-up and can have rapid relaxation. Might as well. Forest fire size follows a power law, and there’s a slow buildup of trees growing then a rapid relaxation of fires. Sure! Punctuated equilibrium means that there’s a slow buildup of evolution, and then a rapid evolutionary change (I guess). Why not?

Bak created a movement. He was very deliberate about it, too. If you look at the citations, both the earthquake paper and the punctuated equilibrium paper were co-written by Bak. They were then cited and carried forward by other people in those specific fields, but he started the avalanche (if you’ll forgive the pun).

And he didn’t stop with scientific papers, either. He spread the idea to the public as well, with his book How Nature Works: the science of self-organized criticality. First of all, hell of a title, no? Second of all, he was actually pretty successful with this. His book, published in 1996, currently has 230 ratings on Goodreads. For a wonky book from 1996, that’s pretty amazing!

That, in a nutshell, is how the paper got to 8600 citations across such a broad range of fields. It started off as a good, interesting idea with maybe some problems. It could serve as an analogy and possibly an explanation for a lot of things. Then it got evangelized really, really fervently until it ended up being an explanation for everything.

But what about science?

This brings us back to our second question, which is: what have the effects on science been of this? Well, of course, good and bad.

Let’s discuss the good first, because that’s easier. The good has been when SOC has proved to be an interesting explanation of phenomena that didn’t have great explanations before. For example, this paper, which I relied on heavily in general for this essay, discusses how SOC has apparently been a good paradigm for plasma instabilities that did not have a good paradigm before.

Now, I completely lack any knowledge of plasma instabilities, so I’ll have to take their word for it, but it seems unlikely that the plasma instability community would know of SOC without Per Bak’s ceaseless evangelism.

The bad is more interesting. Any scientific theory of everything is always going to have gaps. However, most of them never have any validity in the first place. Think of astrology, Leibniz’s monads, or Aristotle’s essentialism: they started off poorly and were evangelized by people who didn’t really understand any science in the first place.

SOC is more nuanced. It had and has some real validity and usefulness. Most of the people who evangelized and cited it were intelligent, honest people of science. However, Bak’s enthusiastic evangelism meant that it was pushed way harder than the average theory. As it was pushed, it revealed problems not just with how SOC was applied, but with a lot of the way theory is argued in general.

The first and most obvious problem was the use of biased models. This is always a tough problem, because not everything can be put in a lab or even observed directly. There is always a tension over when a model is good enough, and over what is ok to put in or leave out. But Bak and his disciples clearly created models that were designed first and foremost to display SOC, and only secondarily to model the underlying behavior.

Bak’s model of punctuated equilibrium is a particularly egregious example. Bak chose to model entire species (rather than individuals), chose to model them only interacting with each other (ignoring anything else), and modeled a fitness landscape (which is itself a model) on a straight line. In more straightforward terms, his model of evolution is dots on a line. Each dot is assigned a number. When the numbers randomly change, they are allowed to change the dots around them too with some correlation.

A one-dimensional fitness landscape: something like this, with the height of the dots being the numbers he assigned. Seriously. That’s his model of evolution. From here.

This is way, way too far from the underlying reality of individuals undergoing selection. It makes zero sense, and was clearly constructed just to show SOC. Somehow, though, it got 1800 citations.
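
If that description sounds abstract, here’s roughly what the model boils down to in code. This is a minimal sketch, assuming the standard Bak-Sneppen update rule (replace the lowest number and its two neighbors with fresh random numbers each step), not Bak’s original implementation; the sizes and step count are arbitrary choices.

```python
# A minimal sketch of the "dots on a line" evolution model (Bak-Sneppen style).
# Assumption: the standard update rule -- find the lowest fitness, then replace
# it and its two neighbors with fresh random numbers. Sizes are arbitrary.
import random

N = 100                                   # number of "species" on a ring
fitness = [random.random() for _ in range(N)]

for step in range(100_000):
    weakest = min(range(N), key=lambda i: fitness[i])   # least-fit species
    for i in (weakest - 1, weakest, weakest + 1):        # it and its neighbors "mutate"
        fitness[i % N] = random.random()

# After enough steps, nearly all fitnesses sit above a critical threshold
# (roughly 0.67), and the mutations arrive in power-law-distributed avalanches.
print(min(fitness))
```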

However, I feel less confident criticizing Bak’s model of earthquakes. In it, he models the crust as a two-dimensional array of particles. When a force is applied to one particle, it’s also applied to its neighbors (the sketch below shows what that kind of rule looks like). Now, obviously earthquakes are three-dimensional, and there is a wave component to them that’s not well represented here, but this seems like an ok place to start.
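
For comparison, here is a minimal sketch of that family of model, using the classic BTW sandpile rule, which is essentially the “force applied to one particle is also applied to its neighbors” idea. The grid size, threshold of 4, and step count are illustrative choices, not Bak’s exact earthquake model.

```python
# A minimal BTW-style sandpile sketch on a 2D grid (illustrative, not Bak's
# exact earthquake model). Rule: add "grains" of stress one at a time; any cell
# holding 4 or more grains topples, passing one grain to each of its 4
# neighbors (grains that fall off the edge are lost). One added grain can
# trigger a cascade of topplings -- an avalanche.
import random

L = 50
grid = [[0] * L for _ in range(L)]

def relax(r, c):
    """Topple until stable, starting from one overloaded cell; return avalanche size."""
    size = 0
    unstable = [(r, c)]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < 4:
            continue
        grid[r][c] -= 4
        size += 1
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < L and 0 <= nc < L:
                grid[nr][nc] += 1
                if grid[nr][nc] >= 4:
                    unstable.append((nr, nc))
    return size

sizes = []
for step in range(200_000):
    r, c = random.randrange(L), random.randrange(L)
    grid[r][c] += 1              # slow buildup: one grain at a time
    sizes.append(relax(r, c))    # rapid relaxation: the avalanche

# Once the pile reaches its critical state, avalanche sizes follow a power law.
print(max(sizes))
```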

Maybe it’s not though. Maybe we should really start with three dimensions, and model everything we know about earthquakes before we call an earthquake model useful. Or maybe we should go one step further, and say an earthquake model is only useful once it’s able to make verifiable predictions. Newton’s models of the planets could predict their orbits, after all.

Then again, the incredibly complicated epicycle model could also predict the movements of the planets, to a point. Prediction can’t be the be-all and end-all. This image from Wikipedia is from Cassini’s highly developed epicycle model.

A purist might hold that models aren’t useful until they’re predictive, but that’s a tough stance for people actually working in science. They have to publish something, and waiting until your model can make verifiable predictions means that people won’t really be communicating any results at all. Practically speaking, where do we draw the line? Should we rule out any model that was created just to demonstrate a theory, but allow any “good faith” model, no matter how simplistic?

A different sort of issue comes up with SOC’s motte-and-bailey problem. Bak, in his book How Nature Works, proposed SOC for lots of things that it doesn’t remotely apply to. Punctuated equilibrium was just one example. When he was pressed on it, he’d defend it by going back to the examples that SOC was pretty good on.

The motte and bailey style of argumentation, from A Moment With Mumma. I had to find this one, because the top result had text from arguments about communism. Oh, the Internet.

It’s not a problem to propose that a theory applies to new situations, of course. The problem is that so many theorists lean on a theory’s validity in a limited example to justify a much broader application, rather than defending the broader application on its own terms.

On one level, that’s just induction: recognizing a pattern. But much more effort should go into establishing that the pattern actually exists, and then into justifying the new application as soon as possible.

This ties into the next problem: confusing necessary and sufficient conditions. In the initial paper, BTW were pretty careful about their claims. SOC was sufficient, but not necessary, to explain the emergence of fractals and 1/f noise. And a power law distribution, long-range correlations, slow buildup with rapid relaxation, and scale invariance were necessary, but not sufficient, for SOC [2].

Sufficient vs. necessary: for something to be a square, it is necessary for it to have 4 sides, but that’s not sufficient (it could be a non-square rectangle). Having 4 sides of equal length and four 90-degree angles, on the other hand, is both necessary and sufficient for being a square. Image from Quora.

When Bak was hunting for more things to apply SOC to, he got sloppy. He would come close to making claims like “fractals and 1/f noise imply SOC” or “power laws imply SOC.” Now, this is maybe ok at a preliminary stage of the hunt. If you’re looking for more applications of SOC, you have to look somewhere, and anything involving fractals, power laws, or the like is an ok place to start looking. But you can’t make that implication in your final paper.

Not only does this make your paper bad, but it poisons the papers that cite it, too. This is exactly what’s happened with some of the stranger papers that have cited BTW, which is another reason for its popularity besides Bak’s ceaseless evangelism and its validity for limited cases. SOC got involved in neuroscience through this paper, which uses a power law in neuronal avalanches to justify the existence of criticality. In other words, it treats a power law as sufficient to assume criticality, and then goes from there to create a model that will justify self-organization.

But that’s backwards! Power laws are necessary for criticality; they aren’t sufficient. Power laws show up literally everywhere, including in the laws of motion, the Stefan-Boltzmann equation for black body radiation, and the growth rate of bacteria. None of those things are remotely related to criticality, so they obviously can’t imply criticality. The paper, which is cited 454 times (!), is based on a misunderstanding.
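
To make the point concrete, here is one formula from that list plus one more standard physics example (the inverse-square law of gravitation, an addition of mine), neither of which has anything to do with criticality:

$$j = \sigma T^4 \quad \text{(Stefan-Boltzmann)}, \qquad F = G\,\frac{m_1 m_2}{r^2} \quad \text{(Newtonian gravity)}$$

Both are plain power laws, in $T$ and in $r$ respectively.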

SOC is actually kind of a unique case, scientifically, because it did lay out its necessary and sufficient hypotheses so clearly. That’s why I can point out the mistake in this paper. Most less ambitious scientific hypotheses aren’t nearly so clear. For example, here’s the hypothesis of the neuroscience SOC paper, copy-pasted from the abstract: “Here, we demonstrate analytically and numerically that by assuming (biologically more realistic) dynamical synapses in a spiking neural network, the neuronal avalanches turn from an exceptional phenomenon into a typical and robust self-organized critical behaviour, if the total resources of neurotransmitter are sufficiently large.”

The language is a bit dense, but you should be able to see that it’s unclear if they think SOC is sufficient and necessary for neuronal avalanches (you have to have it), or just sufficient (it’s good enough to have it). In fact, I’d wager that they wouldn’t even bother to argue the difference.

It’s only because SOC is such an ambitious theory, and because Bak tried to apply it to so many things, that he was forced to be so clear about necessary vs. sufficient. Way, way too often, scientific papers present suggestive correlations and then handwave what exactly those correlations mean. If you present a causal effect, is your mechanism the only way that effect can occur? Are there other ways?

The weighing

So, in conclusion, the bad parts of SOC’s incredibly wide-ranging influence are a lot of the bad parts of scientific research as a whole. Scientists are professionally incentivized to publish papers explaining things. That’s one of the big purposes of scientific papers. They are not particularly incentivized to be careful with their explanations, as long as they can pass a sniff test from their peers and journal editors.

This means that scientists end up overreaching and papering over gaps constantly. They develop biased models, over-rely on induction without justifying it, and confuse or ignore necessary and sufficient conditions.

BTW’s impact, in the end, was big and complicated. They created an interesting theory which had an enormous impact on an incredible variety of scientific fields, in a way that very few other theories ever have. In most of those fields, SOC was probably not the right fit, although it may have driven the conversation forward. In a few of them, it was a good fit, and helped explain some hitherto unexplained phenomena. In all of them, it introduced new terms and techniques that practitioners were almost certainly unfamiliar with.

It’ll be interesting to see when the next theory of everything comes about. Deep learning and machine learning are turning into a technique of everything, which comes with problems of its own. Who knows?

Footnotes

1. This is where some of the problems with, and the ubiquity of, SOC come from. Bak, in particular, came very close to suggesting that they always come from the same source, which is a much less defensible claim than that they can come from the same source. See the motte and bailey discussion further on.

2. Quick primer on sufficient and necessary: sufficient means that if you have SOC, you are guaranteed to have fractals and 1/f noise, but you don’t need SOC to have those. Necessary means you need power laws, etc., to have SOC, but they alone might not be enough.

How to fix how people learn calculus: make calculus exciting again

Most people who take a calculus course never really learn calculus. They have only a hazy grasp of how the pieces fit together. Sure, they might be able to tell you that the derivative of x^2 is 2x, but ask them why and you’ll get a blank look. They learned to mask their confusion with shortcuts, and their teachers never really checked to see if there was anything deeper.

This is a pity, and a mark of how we fail to teach calculus correctly. If students really learned calculus, they wouldn’t find it confusing. They’d find it shocking, instead. Calculus is unlike arithmetic, algebra, or geometry, and learning calculus is learning a whole new way to think about math.

Arithmetic can be understood by simply counting on your fingers. Geometry can be understood by drawing shapes in the sand (or on the blackboard). Basic algebra can be understood by replacing the numbers in our arithmetic with variables. All of these build off of real-world analogues.

Calculus is different. Calculus has paradoxes at its core, and understanding calculus means coming to grips with these paradoxes in a way that doesn’t have a real-world analogue. This is a tall order. In fact, it’s so tall that it took humanity literally 2000 years to fill it.

When Democritus and Archimedes first approached the integration part of calculus through geometry, they recognized the usefulness of it quickly. Have an irregular shape or parabola that you need to calculate the area of? You just divide it up into infinitely small triangles, and “exhaust” the area. It actually works pretty well.

Archimedes’ method of exhaustion for finding the area of a region under a parabola. From AMSI. Note that the smaller the triangles get, the closer they get to approximating the area, but there are still gaps.

But what does infinitely small mean? The Greeks couldn’t figure it out. If they’re triangles, they presumably have an area. If they have an area, putting infinitely many of them together should add up to an infinite area, even if each one individually is tiny. If we say instead that they don’t have an area, putting any number of them together should add up to an area of zero. But somehow, calculus is supposed to tell us that putting an infinite number of triangles together adds up to a finite area.

It’s related to one of Zeno’s paradoxes, the dichotomy: in order for the runner to reach the end of the race, he has to cross the halfway mark. Once he gets there, he has to cross the halfway mark of what’s left (the 3/4 mark). And halfway again, and so forth. He’s traveling an infinite number of increasingly small distances, so you’d expect him to never get there. Yet somehow, he does. How?
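
For reference, the modern, after-the-fact way of writing down what’s going on in both of those pictures is a convergent geometric series: an infinite number of shrinking pieces can add up to a finite total. The runner’s halves sum to exactly one race, and Archimedes’ rounds of triangles (each round having a quarter of the area of the previous one) sum to exactly 4/3 of the first triangle:

$$\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1, \qquad 1 + \frac{1}{4} + \frac{1}{16} + \cdots = \frac{4}{3}$$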

This should be shocking. Our understanding of (Euclidean) geometry is almost entirely based on what we can draw. Integral calculus looks like something we can draw on paper, but, when we try to, we end up with something that doesn’t really make sense. This very much disturbed the Greeks (and really disturbed the Jesuits following in the Greeks’ footsteps, to the point that they declared “infinitesimals” heretical). What kind of geometry made less sense when you drew it out?

Understanding calculus from our other main method of understanding math, algebra, was even less fruitful at first. The Islamic Golden Age mathematicians, like Sharaf al-Din al-Tusi, experimented a lot with solving polynomials, and eventually realized they could find the maxima of certain functions by something very close to limits (note: that link goes to a clever recast of al-Tusi’s work into a modern-day calculus word problem). But the use of that sort of “pure” algebra stopped there. Without strict definitions of functions or limits, it’s hard to recognize a problem like “finding the maximum of a cubic polynomial” as what it is: finding where the derivative of a function equals zero.

We had to literally extend algebra before we could get to derivative calculus from the other direction. We needed functions and the Cartesian coordinate plane, the latter of which was literally invented by Descartes and his academic descendants to help understand calculus (a fact which surprised me while researching this, given the standard math curriculum). Once we understand functions and have the coordinate plane, we can plot functions onto it. Then we can think about dividing a parabola up into line segments, and examining the slopes of those line segments gets us thinking about how we might predict the way the slope changes over the course of a curve.

From this random PowerPoint presentation. The slope of the tangent line is, of course, the slope of the curve at the point where they touch. The title of this slide suggests one of the main reasons that mathematicians became interested in finally formalizing derivatives: to relate acceleration, velocity, and position. Newton, of course, was the most famous, but Descartes and Galileo before him made huge amounts of progress on the same issue. Leibniz, interestingly, came to calculus from purely theoretical concerns, like the Islamic mathematicians before him.

This is a useful trick. We can get to maxima and minima by looking at where the slope reaches zero. And we’re at the point where we can be shocked again. It’s another paradox!

A smooth parabola can’t really be made of line segments. It’d end up being choppy. So the line segments would have to be infinitely small, and then we get the same issue as before. A parabola of definite length being made up of an infinite number of infinitely small line segments seems like a contradiction. Either they have a length, in which case the parabola is choppy, or they don’t, in which case they shouldn’t be able to “make up” anything.

Here we have a choppy parabola. Each line segment is straight (i.e. has a definite slope) and obviously has an actual length, but they need gaps in between them in order to make the suggestion of a parabola. If we tried to connect them with straight lines, they’d run past each other. From MathWorks.

So we’ve got paradoxes on either side of calculus. If we try to understand calculus by our old geometry, we’ve got a paradox of infinitely small triangles. If we try to understand calculus by our old algebra, we’ve got a paradox of infinitely small line segments. And yet both seem to work well for limited cases. How?
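
The modern escape hatch from both paradoxes is the limit, and it’s worth seeing what the memorized fact from the start of this essay actually unpacks to. The slope of a tiny segment of $f(x) = x^2$, from $x$ to $x + h$, is $\frac{(x+h)^2 - x^2}{h} = 2x + h$, and as the segment shrinks:

$$f'(x) = \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h} = \lim_{h \to 0}\,(2x + h) = 2x$$

No segment in that process is ever actually infinitely small; we only ask what value the slope approaches.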

Ironically, pointing out these paradoxes of infinity can clue us in to the greatest shock of all: integrals and derivatives are two sides of the same coin. This is when calculus gets really surprising. The flip side of finding the rate of change of a curve is finding the area under it.

Every time we add a bit of area (the red stripe), we are adding approximately the value of the function multiplied by the size of the step forward. The smaller the step, the more exact the equality. This can only work with a strict definition of functions, though, which is what the Islamic mathematicians and the Greeks were lacking. From Wikipedia.
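
Written out in modern notation, the relationship that caption is gesturing at is the fundamental theorem of calculus: the rate of change of the accumulated area is the function itself, and, run in the other direction, the total change of a function is the area under its rate of change:

$$\frac{d}{dx}\int_a^x f(t)\,dt = f(x), \qquad \int_a^b f'(x)\,dx = f(b) - f(a)$$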

There is literally nothing in geometry and algebra so shocking as this. It’s so shocking that it took humanity 2000 years, from Democritus to Barrow (the first to come up with a geometric proof of it), to realize it. Not only are geometry and algebra fundamentally related, but they’re related through something that’s both paradoxical and entirely physical (just think of the relationship between velocity and distance traveled).

When students learn this, they should be like, “Holy shit, math! You mean that this entire time there’s been a deep relationship between two subjects I’ve been holding separate in my head? And now, armed with this new knowledge, I can go out and solve real world issues! That’s amazing!”

But, unfortunately, they’re not like that. I mean, I don’t know if any teenager would be caught dead being that enthusiastic about anything, but teachers don’t even attempt to make students that enthusiastic.

It’s not the teacher’s fault, either. The fault is in the curriculum. In this McGraw Hill textbook (which I think I used in my own AP Calculus class), the fundamental theorem of calculus is taught in section 4.5, sandwiched in between “The Definite Integral” and “Integration by Substitution”. So, students are taught how to do a derivative, how to do an integral, then “by the way, these two seemingly unrelated topics are actually deeply related”, then “and also here’s another way to do integrals”.

Students react logically to this. They’re not shocked by the fundamental theorem of calculus, they’re confused. If it’s fundamental, why would it be stuck in the middle of another section? If they’re being taught both derivatives and integrals already, isn’t it obvious that they’re related as being part of calculus? It’s just another opportunity to zone out in math class.

That’s certainly how I remember feeling about it, and I was one of the best students in my high school math class. I didn’t care. I was too busy memorizing limits, differentiation, and integration tricks. When test day came, I had a bunch of formulae and equations in my head that I plugged in appropriately, and I scored the highest mark possible on both my AP Calculus exams.

I was so far into this mindset of “memorize techniques to score well on exams” that I don’t think a single lesson would have done anything, to be honest. To have really learned calculus, I think I’d need to have been taught to appreciate the practical concerns that drove the development of calculus, as well as to understand its theoretical underpinnings.

If I were to teach a calculus class myself, those would actually be the main things I’d focus on. Memorization of formulae and techniques should be a small part of a calculus class, not the majority of it. Sure, it can make parts of calculus easier, but overreliance on it gives a “Chinese room” understanding of calculus. The student sees a problem and is able to put out the correct answer, but doesn’t really understand why the answer is correct. More importantly, if they see a similar problem formulated differently, they’re unable to solve it.

To handle the motivation portion, I’d start by introducing the practical concerns that drove the development of calculus. This could literally be a lab portion of the class. Bring out trebuchets for derivatives or make students try to fill a warehouse for integrals. Show them why people ever cared to calculate these values. But, most importantly, have them try to solve the practical problems first with just geometry and algebra, so they can appreciate the usefulness of calculus in the same way their academic ancestors did. These labs should motivate learning calculus, not illustrate it.

The theoretical underpinnings of calculus would, admittedly, be trickier. Having taught a lot of adults math, I am confident in saying that most students have an ok understanding of geometry and a poor understanding of algebra. This is because calculus is not the only math course that’s structured badly. Precalculus is, I dare say, even worse.

For those who aren’t familiar, precalculus is the American educational system’s way of bridging the gap between algebra and calculus. Instead of focusing on a deeper understanding of algebraic proofs and fundamentals, though, it’s a weird grab bag of introductions to some math that students are probably unfamiliar with. So, a precalculus course introduces functions, polynomials, exponents, logarithms, trigonometry, and polar coordinates, in sequence, one after the other. And because the course is explicitly not about teaching calculus, the only clue students get as to why they’re being taught all this is “you’ll need it later”.

Here’s the table of contents of a precalculus textbook from McGraw Hill. Again, this is pretty similar to the one I used. Imagine moving from polynomials, to logarithms, to trigonometry in the space of 3 chapters. I do not think I could name 3 mathematical topics that have less in common.

What kind of student could learn 6 disparate pieces of math, one after another, in a mostly disconnected fashion, then start the next year and apply them all to calculus? Well, in my experience, pretty much none of them. They fail to learn functions, get confused about why they’re learning the rest of it, and then start calculus having forgotten algebra but still not understanding precalculus.

A calculus course, then, has to take into account that a lot of students won’t have the right background for it. In that case, I’d say the entire first semester or so should be dedicated to a proper theoretical background for calculus: finding areas with geometry, functions, algebraic proofs, and at last the Cartesian coordinate plane to unite algebra and geometry. This would provide a clear theoretical transition into calculus (and theoretical motivation for calculus with proper foreshadowing). 

Then derivatives and integrals can be covered in the second semester, with add-ons like exponents and logarithms, polar coordinates, and infinite series saved for either a follow-up course or for advanced students. The actual calculus section of the course would likely be similar to this excellent MIT textbook, actually.

A calculus course taught this way would, hopefully, make deep sense to the students. They’d begin with developing the background to calculus, ending the first semester with the same background that Newton and Leibniz had when they developed calculus.

Then the second semester could provide that shocking, “Aha!” moment. Students would not just get the knowledge of Newton and Leibniz, but get some small sense of what it must have been like to be them as they made their groundbreaking discoveries.

It took me a long time to appreciate math, and calculus longest of all. I only realized in college that I had been cheated out of a deep understanding of math and given a shallow collection of tricks instead. The education system focuses so much on the utility of math, even when it’s a reach (see the derivatives of trigonometric functions). It should focus on the beauty, the shock, and the awe instead.

Why most intro philosophy courses feel useless and how to fix them

Introduction to philosophy tends to be a useless class. At its best, it feels like a drier version of the stuff you argue about with your friends while high. At its worst, it feels like listening to high people argue while you’re sober. Neither one makes you feel like you’ve accomplished much more than high talk.

These problems are structural. It’s not just how the classes are taught, but what’s taught in them. For instance, take a look at the syllabus of this Coursera course, which actually receives great reviews.

Syllabus to Introduction to Philosophy

  • What is Philosophy?
  • Morality: Objective, Relative or Emotive?
  • What is Knowledge? And Do We Have Any?
  • Do We Have an Obligation to Obey the Law?
  • Should You Believe What You Hear?
  • Minds, Brains and Computers
  • Are Scientific Theories True?
  • Do We Have Free Will and Does It Matter?
  • Time Travel and Philosophy

Judging by the reviews (4.6 stars from 3,941 ratings!), this is probably a fun class. But this class, without a doubt, is pretty useless.

How do I know that? Well, because literally zero of the questions are answered. It says so in the syllabus. For example, this is how they discuss “Morality: Objective, Relative or Emotive?”:

We all live with some sense of what is good or bad, some feelings about which ways of conducting ourselves are better or worse. But what is the status of these moral beliefs, senses, or feelings? Should we think of them as reflecting hard, objective facts about our world, of the sort that scientists could uncover and study? Or should we think of moral judgements as mere expressions of personal or cultural preferences? In this module we’ll survey some of the different options that are available when we’re thinking about these issues, and the problems and prospects for each.

It’s both-sides-ism. The syllabus literally guarantees that you won’t get an answer to the question; the best you can get is “options”. This both-sides-ism is even worse for the questions that obviously have right answers. Yes, we have knowledge. Yes, we have an obligation to obey the law most of the time. Yes, most scientific theories are true.

Now, it’s possible to cleverly argue these topics using arcane definitions to make a surprisingly compelling case for the other side. That can be fun for a bit. But introduction to philosophy should be about providing clear answers, not confusing options.

Let me make my point with an analogy. Imagine going into an introductory astronomy course, knowing very little about astronomy besides common knowledge. The topic of the first lesson: “Does the Earth revolve around the sun?” The professor would then present compelling arguments both for and against, without ever concluding for either side.

If the class was taught well, a student’s takeaway might be something like, “There are very compelling arguments for both sides of whether or not the Earth revolves around the sun.” The student would probably still assume that the Earth revolved around the sun, but assume that knowledge was on a shaky foundation.

This would make for a bad astronomy class, which is why it’s not done. But this is done all the time in philosophy. In fact, most of the readers of this essay, if they only have a surface level impression of philosophy, probably assume philosophy is about continually arguing questions without ever coming to conclusions.

That’s not what philosophy is. At least, that’s not what most of it is. Philosophy is, in fact, the foundation of how to think and how to evaluate questions. Every single academic subject either started as philosophy or was heavily influenced by it, and philosophy can still contribute to all of them.

Making philosophy feel useful again

Introduction to philosophy, properly taught, should be like teaching grammar. Students can think and analyze without philosophy, just like they can speak and write without knowing the formal rules of grammar. But philosophy should provide them with the rigorous framework to think more precisely and, in turn, analyze subjects and ideas in a way that they could not before. Philosophy should change the way a student thinks as irrevocably as knowing when exactly to use a comma changes the way a student writes.

In order for that to be the case, though, introduction to philosophy has to be a different class. It has to be a class with clear answers and clear takeaways, rather than a class with fun questions and arcane discussions. It has to be explicitly didactic: philosophy has things to teach, not just things to discuss.

If I were to design a philosophy course, that’s what I’d do. I’d make a course that assumed no background in philosophy, took a student through the most important, life-changing ideas in philosophy, and gave clear, actionable ways to change how a student should think and live. At the end of the course, I’d want a student to feel confident taking everything I taught out of the classroom to affect the rest of their lives.

And you know what? That’s exactly what I did when I made my own introduction to philosophy course.

Let me give some background. Although I’ve always loved self-studying philosophy, I only took two philosophy courses in college, and I got a B in one and a B+ in the other. The first featured a large, soft man who droned about theory of knowledge while covered in chalk dust. The second featured a hyperactive woman who attempted to engage the class in discussion about our ethical intuitions, while I attempted to engage my intuitions about Flash games (laptops are always a danger in boring classes). 

After college, however, a happenstance meeting got me a job teaching a massive online philosophy course to a Chinese audience. This was a difficult job: I was teaching philosophy in these students’ second language, these students were paying several hundred dollars for the course, and they had no reason to be there besides their own interest (not even grades). The students could drop my class at any time and get a refund.

In fact, because I got realtime viewership numbers, I could literally see people drop my class anytime I ventured into a boring subject. It was terrifying and exhilarating. I lasted 1.5 years in this company (until they switched to Chinese language instruction only), taught around 5000 students total, and went through 3 complete overhauls of my syllabus.

 By the time of my last overhaul, I had decided that the guiding principle of my class would be as I wrote above: a backbone to the rest of your intellectual life. Specifically, I had 3 sections to my course: how to think, how to live, and how things should work.

My own course: how to think, how to live, and how things should work

I chose those 3 sections because I felt they were the most important things philosophy had to offer, and the ones that had the greatest impact on my own life.

“How to think” directly related to both the academic subjects that my students (most of whom were in college or right out of college) were familiar with, and the later sections of my course. As I described it to my students, the takeaways from “how to think” served as a toolbox. 

The power and the limitations of deduction, induction, and abduction apply to everything built on them, which basically encompasses all academic knowledge. It’s like starting a boxing class with how to throw a punch: a pretty reasonable place to start.

“How to live” was what it sounded like: how to live. I didn’t want to simply call it ethics, as I wanted to make it clear to my students that they should take these lessons out of the classroom. After I described the ethical philosophies to my students, we evaluated them both logically, using the tools from “how to think”, and emotionally, seeing if they resonated.

If the ethical philosophy was both logical and emotionally resonant, I told my students to be open to changing how they lived. All philosophy should be able to be taken out of the classroom, especially something as near to life as ethics. I’m just as horrified by a professor of ethics who goes home and behaves unethically as I would be by a professor of virology who goes home and writes anti-vax screeds.

Finally, “how things should work” was my brief crash course in political philosophy. Political philosophy is a bit hard to teach because the parts that are actually practicable tend to be sequestered off into political science. It’s also hard to teach because, frankly, college students don’t have a lot of political pull anywhere, and China least of all.

So, instead, I taught my students political philosophy in the hopes that they could take it out of the classroom one day. As we discussed, even if their ultimate position is only that of a midlevel bureaucrat, they still will be able to effect change. In our final lesson, actually, we talked about the Holocaust in this respect. The big decisions came from the leaders, but the effectiveness of the extermination came down to the decisions of ordinary men.

Above all, I focused in each section on how what they learned could change their thinking, their lives, and their actions. To do so, I needed to focus on takeaways: what should my students be taking away from the life and body of work of Socrates, Wittgenstein, or Rawls? As an evidently mediocre philosophy student myself, I am all too aware that asking students to remember an entire lecture is frankly unreasonable. I mean, I taught the course, and I have trouble remembering entire lectures a few years later.

So, I focused on key phrases to repeat over and over again, philosophers boiled down to their essence. For Aristotle’s deduction: “it’s possible to use deduction to ‘understand’ everything, but your premises need to be carefully vetted or your understanding will bear no relation to reality”. For Peirce’s pragmatism: “your definitions need to be testable or they need to be reframed to be so”. I ended each lecture by forcing my students to recall the takeaways, so that the takeaways would be the last thing they remembered as they left.

I also distributed mind maps showing how each philosopher built on the last. The mind maps started out empty, then were filled in with each subsequent lecture and its takeaways. Not only did this give students a greater understanding of how each philosopher fit into the course, it gave them a clear sense of progress as they watched their maps fill in.

Philosophy is one of humanity’s greatest achievements. The fact that it’s been relegated to just a collection of neat arguments in a classroom is a tragedy. We live in an age of faulty arguments, misleading news, and a seeming abandonment of even a pretense of ethics. Philosophy can change the way people see the world, if only it’s taught that way.

I’ve discussed the details of how I approached my class below, and also left a link for the course materials I developed. I haven’t touched the material in a couple years, but it’s my hope that others will be able to use it to develop similar philosophy courses. 

A Google Drive link for the course materials I developed

The details of how I structured my course

How to think and takeaways

In “how to think”, we first discussed arguments and their limitations. Arguments are the language of philosophy (and really the language of academia). Attempting philosophy without being able to form a philosophical argument is like attempting to study math without being able to form equations. You can appreciate it from the outside, but you won’t learn anything.

To introduce arguments, I used Socrates, of course. His arguments are fun and counterintuitive, but they’re also very clear examples of the importance of analogy and definitions in philosophical arguments. Socratic arguments always started with careful definitions, and always proceeded by analogy. This same structure is omnipresent in philosophy today, and extends to other fields (e.g. legal studies) as well.

To discuss the limitations of this approach I brought in Charles Sanders Peirce’s pragmatic critique of philosophical definitions, and later Wittgenstein’s critique of “language games”, which can easily be extended to analogies. As should probably be clear by my bringing in 19th and 20th century philosophers, I wasn’t aiming to give students a chronological understanding of how argumentation developed in philosophy. I was aiming to give them a tool (argumentation), and show them its uses and limitations.

From there I went on to how to understand new things through deduction and induction. It is easy for philosophy courses, at this point, to discuss deduction, discuss induction, and then discuss why each is unreliable. This leaves the student with the following takeaway: there are only two ways to know anything, and both are wrong. Therefore, it’s impossible to know anything. Given that this is obviously not the case, philosophy is dumb.

I really, really wanted to avoid that sort of takeaway. Instead, I again wanted to give students a sense of deduction and induction as tools with limitations. So I started students off with Aristotle and Bacon as two believers in the absolute power of deduction and induction, respectively. I took care to make sure students knew that there were flaws in what they believed, but I also wanted students to respect what they were trying to do.

For deduction, I then proceeded to use Cartesian skepticism to show the limitations of deduction, and then Kantian skepticism to show the limitations of deduction even beyond that. This reinforced the lesson I taught with Socratic arguments: deduction is powerful, but the premises are incredibly important. Aristotle never went far enough with questioning his premises, which is why so much of his reasoning was ultimately faulty.

Discussing the limits of induction was more interesting. From the Bacon lesson, my students understood that induction was omnipresent in scientific and everyday reasoning. It obviously works. So, Hume’s critique of induction is all the more surprising for its seeming imperviousness. Finally, bringing in Popper to help resolve some of those tensions was a natural conclusion to how to think.

At the end of this section (which took 10 classes, 2 hours each), my students had learned the fundamentals of philosophical reasoning and its limitations. They were prepared to start applying this reasoning in their own academic and personal lives. They were also prepared to think critically about our next sections, how to live and how things should work.

They did not come away thinking that there were no answers in philosophy. They didn’t inherently distrust all philosophical answers, or think that philosophy was useless. It’s possible to understand the flaws in something without thinking it needs to be thrown out altogether. That was the line I attempted to walk.

How to live and takeaways

Once I finished teaching my students how to think philosophically, I embarked on telling my students how philosophers thought they should live. My theme for this segment was a quote I repeated throughout, from Rilke, “For here there is no place that does not see you. You must change your life.”

In other words, I wanted to introduce my students to philosophy that, if they accepted it, would change the way they chose to live their lives. Ethical philosophy now is often treated like the rest of philosophy, something to argue about but not something to change yourself over. In fact, surveys show that professors of ethics are no more ethical than the average person.

This is a damn shame. It doesn’t have to be this way, and it wasn’t always this way. In fact, it’s not even this way for philosophy outside of the academy today. I’ve personally been very affected by my study of existentialism and utilitarianism, and I know stoicism has been very impactful for many of my contemporaries.

That’s the experience I wanted for my students. I wanted them to engage critically with some major ethical philosophies. If they agreed with those ethical philosophies, I wanted them to be open to changing the way they acted. In fact, I specifically asked them to do so.

The ethical philosophers I covered were the ones that I felt were most interesting to engage with, and the most impactful to me and to thinkers I’ve respected.

First, I covered Stoicism. I asked my students to consider the somewhat questionable philosophical basis for it (seriously, it’s really weird if you look it up), but also consider the incredibly inspiring rhetoric for it. If Hume is right, and thoughts come from what you feel and are only justified by logic, then the prospect of controlling emotions is incredibly appealing. Even if he isn’t, any philosophy that can inspire both slaves and emperors to try to master themselves is worth knowing. Plus, the chance to quote Epictetus is hard to pass up.

I then covered Kant, as an almost polar opposite to Stoicism. Kant’s categorical imperative ethics is well reasoned and dry. You can reason through it logically and it’s interesting to argue about, but it’s about as far away from inspiring as you can get. Even the core idea: “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law,” is uninspiring, and it’s very hard to imagine acting as if everything you did was something everyone else should do. As I asked my students, how do you decide who gets the crab legs at a buffet? But my students needed to decide for themselves: did they want to follow a logical system of ethics, or an inspiring one?

We then covered utilitarianism. This, as I’ve mentioned, is something I’m biased towards. My study of utilitarianism has changed my life. I donate far more to charity than almost anyone I know because of the utilitarians’ arguments: keeping an extra hundred dollars per month does not affect my life in the slightest, but it can make an incredible impact on someone less fortunate.

I presented two sides to utilitarianism: the reasoned, calm utilitarianism of Bentham, and the radical demands of Peter Singer. For Bentham, I asked my students to consider utilitarianism in their institutions: can they really say their institutions maximize pleasure and minimize pain? For Singer, I asked them to consider utilitarianism in their own lives: why didn’t they donate more to charity?

What I wanted my students to think about, more than anything, was how and if they should change the way they live. Ethical philosophy, as it was taught at Princeton, was largely designed to be left in the classroom (with the noted exception of Peter Singer’s class). Ethical philosophers today, having been steeped in this culture, likewise leave their ethics in their offices when they go home for the night. To me, that’s as silly as a physics professor going home and practicing astrology: if it’s not worth taking out of the classroom, it’s not worth putting into the classroom.

Finally, we covered existentialism. I knew my students, being in their late teens and early twenties, would fall in love with the existentialists. It’s hard not to. At that age, questions like the purpose of life and how to find a meaningful job start to resonate, as students start running into those issues in their own lives. The existentialists were the only ones to meaningfully grapple with those questions.

My students came into this section with the tools to understand ethical philosophies. They came out of it with ideas that could alter the course of their lives, if they let them. That, to my mind, is what philosophy should be. Of course, we weren’t done yet. Now that they had frameworks to think about how their lives should be, I wanted them to think about how institutions should be.

How things should work and takeaways

Teaching political philosophy to Chinese students is interesting and complicated, but not necessarily for the reasons you’d expect. I wanted to teach my students the foundations of liberalism, and, when I first taught the course, I naively thought that I’d be starting from scratch. I wasn’t. In fact, as I was informed, Chinese students go over Locke in high school, and often John Stuart Mill in college. They’re just thoroughly encouraged to keep that in the classroom.

So, my task wasn’t to introduce students to the foundations of liberalism. My task turned out to be the same as the impetus for this course: to make political philosophy relevant. 

This is actually tough. Almost all political philosophy is, frankly, so abstract as to be useless. While I was teaching the course, for instance, Donald Trump was rapidly ascending to the presidency (and became President right before one of my classes, actually). Locke didn’t have a ton to say about that sort of thing.

But, instead of avoiding the tension in my attempt to make political philosophy relevant, I tried my best to exploit it. I roughly divided the section into idealistic political philosophy and practical political philosophy. Plato and Locke were the idealists; Machiavelli, the Federalists, and Hannah Arendt were the practical ones.

When I discussed Plato and Locke, I wanted to discuss their ideas while making it clear they had zero idea how to bring them about. Plato, for his Republic, needed to lie to everyone about a massive eugenics policy. Locke, for his liberal ideals, came up with an idea of property that was guaranteed to please nobody with property. They’re nice ideas (ish), but their most profound impact has been as post-hoc justification for ideals people already held (e.g. the Americans with Locke’s natural rights).

I wanted my students to understand how Machiavelli engaged with the nitty-gritty of how ruling actually worked, and did so with a ton of examples (inductive reasoning). Even in his more “idealistic” work, the Discourses on Livy, he wrote about his ideals with a detailed knowledge of what did and did not work in the kingdoms of his time.

For the Federalists, I similarly discussed how much more involved they were with the practicalities of property. The Federalists listed Locke as an influence, but they actually had to build a country. They wrote pages upon pages of justifications and details about taxes, because they knew that the practicalities of “giving up a little property to secure the rest of it” often led to bloodshed.

Finally, I ended the section with a discussion of Hannah Arendt’s banality of evil. In a time of increasing authoritarianism around the world, I wanted my students to be aware of the parts of political philosophy that would immediately impact them. They were likely not to be rulers or princes, but they were likely to be asked to participate in an evil system if they entered politics (especially in the China of today). I wanted them to be acutely aware of the bureaucracy that made evil governments possible, and the mindset that could stop them.

My political philosophy section ended with the takeaways that politics can be analyzed with the same thinking tools as the rest of philosophy, and weighed with the same ethics of how to live. The minutiae are complicated, but it is not a new world.

Final takeaways

In the end, the fact that nothing is a new world was my intended takeaway from the entire course. Philosophy underpins everything. It is the grammar of thinking. Scientific experiments, legal arguments, detailed historical narratives: all of these methods of making sense of the world have their roots in philosophy and can be analyzed philosophically.

And, if everything can be analyzed philosophically, then you might as well start with your life and the society you live in. It’s not enough to just analyze, though. Philosophy should not be something to bring out in the classroom and then put away when you come home. If the way you live your life is philosophically wanting, change it. If your society is on the wrong course, fix it, even if you can only fix it a little.

There’s nothing worse in education than lessons that have no impact on the student. Conversely, there is no higher ideal for education than to permanently change the way a student evaluates the world. The classroom should not simply be a place of empty rhetoric or even entertainment. To paraphrase Rilke, “For here there should be no place that does not see you. You must change your life.”

[Once again, Google Drive link for all my course materials].

Lessons in business from the golden age of advertising

I previously wrote a post on lessons in marketing from the golden age of advertising in early 20th-century America, which I think went pretty well. Unfortunately (but also fortunately), there are more great stories from the admen than could fit in such a restrictive format.

So, here’s my attempt at relaying them. For this post, I relied entirely on The Man Who Sold America, a biography of Albert Lasker by Cruikshank and Schultz, available at fine retailers near you.

If you don’t know how to offer something, ask for something instead

Claude Hopkins was one of the great advertising geniuses of his day. Unfortunately, he was somewhat promiscuous with how he lent his advertising genius, and ended up making a tremendous success out of “Liquozone”, which purported to be a germicide made out of liquid oxygen.

When muckrakers revealed that Liquozone was not pure oxygen but instead just water, Claude Hopkins was disgraced. This left him unhappy and also literally a millionaire.

Albert Lasker wanted to offer Hopkins a job at Lord & Thomas, but didn’t know how to go about doing so (as obviously money wasn’t going to be enough). So he asked around, and found out from a mutual friend that Hopkins was quiet, sensitive, and stingy.

So Lasker came upon a solution. He found out that Hopkins had been reluctant to buy his wife a new electric automobile, as he thought they were too expensive. He arranged a lunch with Hopkins, and showed him a contract from Van Camp for $400,000 contingent on satisfactory copy.

He told Hopkins that he needed his help for the contract, as the copy that he had received from his employees was terrible. If Hopkins would agree to help him, Lasker would buy his wife an electric car as thanks. Hopkins agreed.

The rest was history. Lasker knew he couldn’t offer Hopkins anything he didn’t already have. The only thing he could do was ask.

Experts get clients

Any service business is perpetually concerned with how to get new clients. Advertising is no exception.

One of the best ways to get clients is to be seen as an expert in the field. In the age of the Internet, the easiest way to do that is to publish a blog, a vlog, or a Twitter feed.

Back in the golden age of advertising, it was not quite so easy. So, instead, Lasker put out an ad announcing the creation of an “advertising advisory board”. The ad read: “Here we decide what is possible and what is impossible, so far as men can. This advice is free. We invite you to submit your problems. Get the combined judgment of these able men on your article and its possibilities. Tell them what you desire, and let them tell you if it can probably be accomplished.”

Of course, the advisory board was entirely made up of Lord & Thomas employees. But this ad worked: they got hundreds of inquiries, rejected the 95% they didn’t want, and took the top 5% as clients.

How to end a partnership with everyone happy-ish

Lasker ended quite a few partnerships over his business career, and he always did so in the same way.

He’d tell his partner, “I’ll buy out your share for 2x, or you can buy out my share for x.”

The partner got to pick whichever side of the deal he preferred, and both prices were tilted in his favor. So while it’d still be clear that Lasker wanted to break up, at least people wouldn’t be quite so unhappy about it.

Using a partner to double-team a client

Lasker and Hopkins made a great team. They were excellent at putting the razzle-dazzle on a client.

This started during the introduction. When a Lord & Thomas solicitor would first introduce themselves to a client, they’d speak glowingly of the genius of Albert Lasker. If the client visited the office, Lasker would speak glowingly of the wizardry of Hopkins. By the time the client was pitched by Hopkins, they’d feel like they were getting pitched by the god of marketing himself.

This double-teaming would continue during the pitch. Hopkins would pitch the campaign, and Lasker would remain quiet. If the client disagreed with any part of the pitch, Lasker would automatically side with the client and ask Hopkins to argue his case.

Then, if Lasker actually agreed with the client, he’d say so, and Hopkins would back down. On the other hand, if Lasker actually agreed with Hopkins, Lasker would turn to the client and say, “Well, I guess we’re both wrong.”

This way, the client never felt like they were being sold. Instead, they felt like it was a collaborative process by really smart people who just wanted the best for their product.