Add content to quartz v4, links are working locally
kasrakoushan committed Oct 7, 2023
1 parent 3268d45 commit f4833e6
Showing 19 changed files with 207 additions and 5 deletions.
Empty file removed content/.gitkeep
10 changes: 10 additions & 0 deletions content/ad-hoc-hypothesis.md
@@ -0,0 +1,10 @@
---
title: Adding ad hoc hypothesis to resist falsification
enableToc: true
---

In the Popperian view of epistemology, the key ingredient of knowledge-growth is _criticism_. To contribute to the growth of knowledge, the creators of theories should _seek out criticism_ of their theory. One way to do this is to seek out logical inconsistencies in it. Another way is to conduct experiments that would either falsify or corroborate the theory.

The anti-scientific stance in this view would be to _resist_ criticism. One common way to do this (you see this often with conspiracy theorists) is to keep modifying the theory every time some evidence arises to contradict it. Say you've come up with some equations for the motion of all objects, and then you find that pink feathers do not follow this equation. The conventionalist (as Popper calls it, someone who wants to preserve the existing theory) will say, "no worries, just add an additional hypothesis to the theory that goes: pink feathers will move according to this other equation Y."

As more and more counterexamples come up, the conventionalist adds more and more exceptions to their theory. At some point this becomes futile. The better approach is to consider the old theory dead, and to come up with a new theory that explains both the old and the new observations.
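The conventionalist move has a familiar shape in software: a growing pile of special cases bolted onto a model that no longer fits. A toy sketch of the pattern (the objects and numbers here are invented for illustration):

```python
# A toy "theory of motion" patched conventionalist-style: every
# counterexample gets its own ad hoc exception instead of prompting
# a new theory. (All objects and values are invented for illustration.)

def predicted_speed(obj: str) -> float:
    # Original theory: everything falls at 9.8 units.
    exceptions = {
        "pink feather": 0.3,    # patch #1, added after falsification
        "soap bubble": 0.1,     # patch #2
        "helium balloon": -1.0, # patch #3 ... and so on, forever
    }
    return exceptions.get(obj, 9.8)

print(predicted_speed("rock"))          # 9.8 — the "theory"
print(predicted_speed("pink feather"))  # 0.3 — the ad hoc patch
```

The exceptions table can grow without bound, and each entry explains nothing; that is the signature of a theory being kept alive by ad hoc hypotheses.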
6 changes: 6 additions & 0 deletions content/ai-discourse.md
@@ -0,0 +1,6 @@
---
title: Current Understanding of AI Discourse
enableToc: true
---

WIP. Where I talk about my current understanding of AI discourse.
37 changes: 37 additions & 0 deletions content/hoel-harris-utilitarianism.md
@@ -0,0 +1,37 @@
---
title: Contra Harris Contra Hoel on Consequentialism
enableToc: true
---

In [Why I am not an effective altruist](https://erikhoel.substack.com/p/why-i-am-not-an-effective-altruist), Erik Hoel criticizes the philosophical core of the EA movement. The problem with effective altruism—which is basically utilitarianism—is that it leads to repugnant conclusions (a term coined by Derek Parfit in *Reasons and Persons*):

- Example: strict utilitarianism would claim that a surgeon, trying to save ten patients with organ failure, should find someone in a back alley, murder them, and harvest all their organs.
- Example: utilitarianism would claim that there is some number of "hiccups eliminated" that would justify feeding a little girl to sharks.

For a lot of utilitarian thought experiments, the initial experiment has a clear conclusion (e.g. you should switch the trolley so fewer people die), but then all the variations of it no longer have a clear conclusion, because morality is not mathematics.

The core mistake of utilitarianism is that it assumes that *the moral value of all experience is a well-ordered set*—that you can line them all up on a single scale, and thus all you need to do to figure out which action is better is some addition and multiplication. In a [follow-up post](https://erikhoel.substack.com/p/we-owe-the-future-but-why?utm_source=substack&utm_medium=email), Hoel adds: the mistake of utilitarianism is the view that morality is *fungible*, that moral value can be measured like mounds of dirt. But in reality, moral actions are not fungible.

(Note: the fact that moral values are not fungible does not preclude us from making comparisons between actions, e.g. a stubbed toe and the holocaust are entirely different *categories of evils*, but we can still compare them and say the latter is worse. Hoel’s point, though, is that not all moral actions can be compared to each other in an order-preserving manner.)
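The contrast between these two pictures can be made concrete: utilitarian calculus treats moral value as numbers on one scale, so any two outcomes are comparable; a partial order drops that guarantee while still permitting some cross-category comparisons. A minimal sketch (the example pairs and values are invented for illustration):

```python
# Utilitarian picture: every outcome reduces to one number, so any two
# outcomes are always comparable. (Values invented for illustration.)
hedons = {"stubbed toe": -1, "holocaust": -10**9}
assert hedons["stubbed toe"] > hedons["holocaust"]  # always an answer

# Hoel's picture: a partial order. Some pairs of evils are explicitly
# ranked (even across categories), others are simply incomparable.
worse_than = {
    ("holocaust", "stubbed toe"),        # cross-category, yet clearly ranked
    ("betraying a friend", "white lie"),
}

def compare(a: str, b: str):
    if (a, b) in worse_than:
        return "worse"
    if (b, a) in worse_than:
        return "better"
    return None  # incomparable: no single scale ranks them

print(compare("holocaust", "stubbed toe"))  # "worse"
print(compare("white lie", "stubbed toe"))  # None — no fact on one scale
```

The `None` case is the whole disagreement: the utilitarian insists a comparison always exists, the pluralist does not.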

One way to avoid repugnant conclusions is to add more and more parameters to your utilitarian calculus. Hoel claims this is futile, because you'll never have enough parameters. (Popper had a term for this: adding [ad hoc hypotheses](/ad-hoc-hypothesis.md)).

> Just as how Ptolemy accounted for the movements of the planets in his geocentric model by adding in epicycles (wherein planets supposedly circling Earth also completed their own smaller circles), and this trick allowed him to still explain the occasional paradoxical movement of the planets in a fundamentally flawed geocentric model, so too does the utilitarian add moral epicycles to keep from constantly arriving at immoral outcomes.
Hoel thinks the EA movement has done a lot of good; he just disagrees with its philosophical underpinnings. And it still leads to some repugnant-ish conclusions, e.g. that you should become a stockbroker to make more money to donate.

> One can see repugnant conclusions pop up in the everyday choices of the movement: would you be happier being a playwright than a stock broker? Who cares, stock brokers make way more money, go make a bunch of money to give to charity. And so effective altruists have to come up with a grab bag of diluting rules to keep the repugnancy of utilitarianism from spilling over into their actions and alienating people.
### Sam Harris’s criticism

Hoel made a recent appearance on the Sam Harris podcast, in which he and Harris kept butting heads on the same basic point. Harris: all moral theories, even those that are not consequentialist, ultimately make their claims to moral value based on *consequences*. If you ask a deontologist why they advocate for some principle or other, their argument will be framed in terms of the *consequences* of people following that principle. Same thing with virtue ethics, or any other moral system.

For Harris, all the criticisms that Hoel makes still boil down to consequences. It's all just more consequences!

This is technically true, but at some point it begins to sound vacuous. You could describe every moral theory in terms of eventual consequences, but it's not especially useful to do so. It's like saying "It's all about goodness! Morality is all about goodness!" Hoel made this point in his piece: once you strip utilitarianism enough to no longer do strict mathematics around hedonic units, and instead just say "do the most good you can, where good is defined in a loose, complex, personal way", you've arrived at something no one can disagree with.

Choosing a moral theory requires us to answer the question of *how best to think* about *how we should act*. And it just seems that if you think about morality in terms of *maximization*, you are far more likely to engage in morally questionable behavior than if you thought about it in terms of, say, *adhering to principles*, or *cultivating virtues*. Who are the people we think of as ethical heroes, as role models in the moral life? Gandhi, Mandela, Jesus, the Buddha. Are they *rationalistic maximizers*? Or are they just really principled and virtuous people?

There is one place where I do agree with Harris, and it's around preferring human consciousness over artificial consciousness. Hoel claims that a pitfall of consequentialism is that it doesn't give us any reason to prefer our own survival over, say, the survival of some alien or artificial intelligence. And to that I would say: if we *really* come to the conclusion that artificial intelligences can have inner lives as rich and as laden with suffering and happiness as our own, and if they are as concerned with ethics as we are, then why *should* we prefer our wellbeing over theirs?
Binary file added content/images/action-potential.gif
Binary file added content/images/dns records.png
Binary file added content/images/eeg-measurement.png
Binary file added content/images/hebb-cell-assembly.png
Binary file added content/images/neuron-parts.jpeg
Binary file added content/images/neuron-parts.png
Binary file added content/images/neuron-resting-potential.png
Binary file added content/images/pet-image-sleep.png
Binary file added content/images/quartz layout.png
Binary file added content/images/quartz transform pipeline.png
12 changes: 12 additions & 0 deletions content/index.md
@@ -0,0 +1,12 @@
---
title: This site is WIP
---

### My interests

- Neuroscience.
- [Current understanding of neuroscience](/neuroscience.md)
- [Are lesions a good proxy for brain function?](/lesions.md)
- Meditation
- Writing
- Philosophy. See [Contra Harris contra Hoel on consequentialism](/hoel-harris-utilitarianism.md).
22 changes: 22 additions & 0 deletions content/lesions.md
@@ -0,0 +1,22 @@
---
title: Are lesions a good proxy for brain function?
enableToc: true
---

One of the recurring themes in our understanding of the brain is the study of *lesions*. Or perhaps more broadly, regions of the brain that have either been damaged (due to disease or injury) or removed surgically.

Most of the time that we say "X region of the brain is responsible for Y function", it's based on a study of damage or removal of that region. A classic example is [patient HM](https://en.wikipedia.org/wiki/Henry_Molaison), who had his hippocampi removed to mitigate seizures, and subsequently lost the ability to form new declarative memories. Based on this, neuroscientists speculated that the hippocampus is involved in the formation of durable memories. Similar reasoning applies to [Broca's aphasia](https://en.wikipedia.org/wiki/Expressive_aphasia) and [Wernicke's aphasia](https://en.wikipedia.org/wiki/Receptive_aphasia), which are conditions that have led us to conclude that particular brain regions are involved with specific sub-functions of language, like generating speech.

Unfortunately, arguments like this don't give us a robust understanding of *how* any of these brain functions work. The fact that a function Y is impaired after we remove region X of the brain does *not* imply that region X is the exclusive zone in which function Y is executed; and further, it says nothing interesting about *how* Y is accomplished. There could be many other explanations:
- region X could be one of several regions involved in accomplishing Y
- region X could be responsible for some "prerequisite" task that enables function Y, without actually being involved in Y
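Both alternative explanations are easy to exhibit in a toy system, where "lesioning" a component abolishes a behavior that the component does not itself implement (the processing stages here are invented for illustration):

```python
# Toy "brain": memory formation depends on an upstream attention stage.
# Lesioning attention abolishes memory formation, even though attention
# is only a prerequisite — it does not implement memory itself.
# (The stages and wiring are invented for illustration.)

def forms_memory(stimulus: str, lesioned: set) -> bool:
    attended = "attention" not in lesioned and bool(stimulus)
    stored = "hippocampus" not in lesioned and attended
    return stored  # did a memory form?

print(forms_memory("tone", lesioned=set()))            # True
print(forms_memory("tone", lesioned={"hippocampus"}))  # False: expected deficit
print(forms_memory("tone", lesioned={"attention"}))    # False: same deficit,
# yet "attention" is not where the memory is implemented
```

From behavior alone, the two lesions are indistinguishable, which is exactly why "remove X, lose Y" underdetermines where and how Y is implemented.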

At best, exhaustive analysis of how damage to different regions affects behavior can give us a high-level map of how the brain works. But as [David Poeppel notes](https://www.youtube.com/watch?v=-1su5DWUYXo): *a map is not an explanation*. We have a complete map of the *C. elegans* nervous system down to the last neuron and synapse, but we do not have an understanding of how this map generates the worm's behavior.

In a [2017 study](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268) provocatively titled *Could a Neuroscientist Understand a Microprocessor?*, researchers applied neuroscience techniques (like creating lesions and measuring electrical activity) to a microprocessor, and demonstrated that such techniques gave no fundamental insight into the actual workings of the processor. As summarized in Poeppel et al.'s [*Neuroscience Needs Behavior*](https://pubmed.ncbi.nlm.nih.gov/28182904/):

> The study poses the question of whether a neuroscientist could understand a microprocessor. They applied numerous neuroscience techniques to a high-fidelity simulation of a classic video game microprocessor (the ‘‘brain’’) in an attempt to understand how it controls the initiation of three well-known videogames (which they dubbed as ‘‘behaviors’’) originally programmed to run on that microprocessor. Crucial to the experiment was the fact that it was performed on an object that is already fully understood: the fundamental fetch-decode-execute structure of a microprocessor can be drawn in a diagram. Understanding the chip using neuroscientific techniques would therefore mean being able to discover this diagram. In the study, (simulated) transistors were lesioned, their tuning determined, local field potentials recorded, and dimensionality reduction performed on activity across all the transistors. The result was that none of these techniques came close to reverse engineering the standard stored-program computer architecture (Jonas and Kording, 2017).

Poeppel argues that to truly understand the brain, we need not just ad hoc manipulations and measurements of brain activity, but robust *theories*—in particular, algorithms—that would explain *how* the brain executes functions like speech, and then test those theories with measurements of the brain. We can't go the other way around.

All that said, the behavioral consequences of lesions are interesting. First, the fact that entire regions of the brain can be destroyed without killing the organism feels surprising. If, in contrast, you removed a giant piece of your stomach, I'm pretty sure you would quickly die. Perhaps the takeaway is: most of what the brain does is not immediately crucial for survival. (Except the brain stem.) And the consistency we find between brain damage and behavioral deficits does imply that the brain has *some* degree of modularity.