[syndicated profile] scottaaronson_feed

Posted by Scott

Every week, I tell myself I won’t do yet another post about the asteroid striking American academia, and then every week events force my hand.

No one on earth—certainly no one who reads this blog—could call me blasé about the issue of antisemitism at US universities. I’ve blasted the takeover of entire departments and unrelated student clubs and campus common areas by the dogmatic belief that the State of Israel (and only Israel, among all nations on earth) should be eradicated, and the use of that belief as a litmus test for entry. Since October 7, I’ve dealt with comments and emails pretty much every day calling me a genocidal Judeofascist Zionist.

So I hope it means something when I say: today I salute Harvard for standing up to the Trump administration. And I’ll say so in person, when I visit Harvard’s math department later this week to give the Fifth Annual Yip Lecture, on “How Much Math Is Knowable?” The more depressing the news, I find, the more my thoughts turn to the same questions that bothered Euclid and Archimedes and Leibniz and Russell and Turing. Actually, what the hell, why don’t I share the abstract for this talk?

Theoretical computer science has over the years sought more and more refined answers to the question of which mathematical truths are knowable by finite beings like ourselves, bounded in time and space and subject to physical laws.  I’ll tell a story that starts with Gödel’s Incompleteness Theorem and Turing’s discovery of uncomputability.  I’ll then introduce the spectacular Busy Beaver function, which grows faster than any computable function.  Work by me and Yedidia, along with recent improvements by O’Rear and Riebel, has shown that the value of BB(745) is independent of the axioms of set theory; on the other end, an international collaboration proved last year that BB(5) = 47,176,870.  I’ll speculate on whether BB(6) will ever be known, by us or our AI successors.  I’ll next discuss the P≠NP conjecture and what it does and doesn’t mean for the limits of machine intelligence.  As my own specialty is quantum computing, I’ll summarize what we know about how scalable quantum computers, assuming we get them, will expand the boundary of what’s mathematically knowable.  I’ll end by talking about hypothetical models even beyond quantum computers, which might expand the boundary of knowability still further, if one is able (for example) to jump into a black hole, create a closed timelike curve, or project oneself onto the holographic boundary of the universe.
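For tiny inputs, the Busy Beaver function can actually be brute-forced. As an illustrative sketch (not from the talk—function names and conventions are mine), the following enumerates all 2-state, 2-symbol Turing machines started on a blank tape and recovers the maximum number of steps any halting one takes, which is the step-count variant of BB(2):

```python
from itertools import product

def run(tm, cap):
    """Simulate a 2-state, 2-symbol Turing machine on a blank two-way tape.
    tm maps (state, symbol) -> (write, move, next_state); next_state -1 halts.
    Returns the number of steps taken if the machine halts within cap steps,
    else None (treated as non-halting for this tiny search)."""
    tape, pos, state = {}, 0, 0
    for step in range(1, cap + 1):
        write, move, nxt = tm[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == -1:  # the halting step itself is counted
            return step
        state = nxt
    return None

def busy_beaver_2():
    """Brute-force the max steps over all halting 2-state, 2-symbol machines."""
    keys = [(s, r) for s in range(2) for r in range(2)]
    # Each table entry chooses: symbol to write, head move (L/R), next state
    # (0, 1, or -1 for halt). That's 12 options per entry, 12^4 machines total.
    options = list(product([0, 1], [-1, 1], [-1, 0, 1]))
    best = 0
    for choice in product(options, repeat=4):
        steps = run(dict(zip(keys, choice)), cap=100)
        if steps is not None:
            best = max(best, steps)
    return best
```

The known value here is 6 steps; already at 5 states the same question required an international collaboration, and at 745 states it escapes the axioms of set theory.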

Now back to the depressing news. What makes me take Harvard’s side is the experience of Columbia. Columbia had already been moving in the right direction on fighting antisemitism, and on enforcing its rules against disruption, before the government even got involved. Then, once the government did take away funding and present its ultimatum—completely outside the process specified in Title VI law—Columbia’s administration quickly agreed to everything asked, to howls of outrage from the left-leaning faculty. Yet despite its total capitulation, the government has continued to hold Columbia’s medical research and other science funding hostage, while inventing a never-ending list of additional demands, whose apparent endpoint is that Columbia submit to state ideological control like a university in Russia or Iran.

By taking this scorched-earth route, the government has effectively telegraphed to all the other universities, as clearly as possible: “actually, we don’t care what you do or don’t do on antisemitism. We just want to destroy you, and antisemitism was our best available pretext, the place where you’d most obviously fallen short of your ideals. But we’re not really trying to cure a sick patient, or force the patient to adopt better health habits: we’re trying to shoot, disembowel, and dismember the patient. That being the case, you might as well fight us and go down with dignity!”

No wonder that my distinguished Harvard friends (and past Shtetl-Optimized guest bloggers) Steven Pinker and Boaz Barak—not exactly known as anti-Zionist woke radicals—have come out in favor of Harvard fighting this in court. So has Harvard’s past president Larry Summers, who’s welcome to guest-blog here as well. They all understand that events have given us no choice but to fight Trump as if there were no antisemitism, even while we continue to fight antisemitism as if there were no Trump.


Update (April 16): Commenter Greg argues that, in the title of this post, I probably ought to revise “Harvard’s biggest crisis since 1636” to “its biggest crisis since 1640.” Why 1640? Because that’s when the new college was shut down, over allegations that its head teacher was beating the students and that the head teacher’s wife (who was also the cook) was serving the students food adulterated with dung. By 1642, Harvard was back on track and had graduated its first class.

My most rage-inducing beliefs

Apr. 14th, 2025 03:42 pm

A friend and I were discussing whether there’s anything I could possibly say, on this blog, in 2025, that wouldn’t provoke an outraged reaction from my commenters. So I started jotting down ideas. Let’s see how I did.

  1. Pancakes are a delicious breakfast, especially with blueberries and maple syrup.
  2. Since it’s now Passover, and no pancakes for me this week, let me add: I think matzoh has been somewhat unfairly maligned. Of course it tastes like cardboard if you eat it plain, but it’s pretty tasty with butter, fruit preserves, tuna salad, egg salad, or chopped liver.
  3. Central Texas is actually really nice in the springtime, with lush foliage and good weather for being outside.
  4. Kittens are cute. So are puppies, although I’d go for kittens given the choice.
  5. Hamilton is a great musical—so much so that it’s become hard to think about the American Founding except as Lin-Manuel Miranda reimagined it, with rap battles in Washington’s cabinet and so forth. I’m glad I got to take my kids to see it last week, when it was in Austin (I hadn’t seen it since its pre-Broadway previews a decade ago). Two hundred fifty years on, I hope America remembers its founding promise, and that Hamilton doesn’t turn out to be America’s eulogy.
  6. The Simpsons and Futurama are hilarious.
  7. Young Sheldon and The Big Bang Theory are unjustly maligned. They were about as good as any sitcoms can possibly be.
  8. For the most part, people should be free to live lives of their choosing, as long as they’re not harming others.
  9. The rapid progress of AI might be the most important thing that’s happened in my lifetime. There’s a huge range of plausible outcomes, from “merely another technological transformation like computing or the Internet” to “biggest thing since the appearance of multicellular life,” but in any case, we ought to proceed with caution and with the wider interests of humanity foremost in our minds.
  10. Research into curing cancer is great and should continue to be supported.
  11. The discoveries of NP-completeness, public-key encryption, zero-knowledge and probabilistically checkable proofs, and quantum computational speedups were milestones in the history of theoretical computer science, worthy of celebration.
  12. Katalin Karikó, who pioneered mRNA vaccines, is a heroine of humanity. We should figure out how to create more Katalin Karikós.
  13. Scientists spend too much of their time writing grant proposals, and not enough doing actual science. We should experiment with new institutions to fix this.
  14. I wish California could build high-speed rail from LA to San Francisco. If California’s Democrats showed they could do this, it would be an electoral boon to Democrats nationally.
  15. I wish the US could build clean energy, including wind, solar, and nuclear. Actually, more generally, we should do everything recommended in Derek Thompson and Ezra Klein’s phenomenal new book Abundance, which I just finished.
  16. The great questions of philosophy—why does the universe exist? how does consciousness relate to the physical world? what grounds morality?—are worthy of respect, as primary drivers of human curiosity for millennia. Scientists and engineers should never sneer at these questions. All the same, I personally couldn’t spend my life on such questions: I also need small problems, ones where I can make definite progress.
  17. Quantum physics, which turns 100 this year, is arguably the most metaphysical of all empirical discoveries. It’s worthy of returning to again and again in life, asking: but how could the world be that way? Is there a different angle that we missed?
  18. If I knew for sure that I could achieve Enlightenment, but only by meditating on a mountaintop for a decade, a further question would arise: is it worth it? Or would I rather spend that decade engaged with the world, with scientific problems and with other people?
  19. I, too, vote for political parties, and have sectarian allegiances. But I’m most moved by human creative effort, in science or literature or anything else, that transcends time and place and circumstance and speaks to the eternal.
  20. As I was writing this post, a bird died by flying straight into the window of my home office. As little sense as it might make from a utilitarian standpoint, I am sad for that bird.

In this terrifying time for the world, I’m delighted to announce a little glimmer of good news. I’m receiving a large grant from the wonderful Open Philanthropy, to build up a group of students and postdocs over the next few years, here at UT Austin, to do research in theoretical computer science that’s motivated by AI alignment. We’ll think about some of the same topics I thought about in my time at OpenAI—interpretability of neural nets, cryptographic backdoors, out-of-distribution generalization—but we also hope to be a sort of “consulting shop,” to whom anyone in the alignment community can come with theoretical computer science problems.

I already have two PhD students and several undergraduate students working in this direction. If you’re interested in doing a PhD in CS theory for AI alignment, feel free to apply to the CS PhD program at UT Austin this coming December and say so, listing me as a potential advisor.

Meanwhile, if you’re interested in a postdoc in CS theory for AI alignment, to start as early as this coming August, please email me your CV and links to representative publications, and arrange for two recommendation letters to be emailed to me.


The Open Philanthropy project will put me in regular contact with all sorts of people who are trying to develop complexity theory for AI interpretability and alignment. One great example of such a person is Eric Neyman—previously a PhD student of Tim Roughgarden at Columbia, now at the Alignment Research Center, the Berkeley organization founded by my former student Paul Christiano. Eric has asked me to share an exciting announcement, along similar lines to the above:

The Alignment Research Center (ARC) is looking for grad students and postdocs for its visiting researcher program. ARC is trying to develop algorithms for explaining neural network behavior, with the goal of advancing AI safety (see here for a more detailed summary). Our research approach is fairly theory-focused, and we are interested in applicants with backgrounds in CS theory or ML. Visiting researcher appointments are typically 10 weeks long, and are offered year-round.

If you are interested, you can apply here. (The link also provides more details about the role, including some samples of past work done by ARC.) If you have any questions, feel free to email hiring@alignment.org.

Some of my students and I are working closely with the ARC team. I like what I’ve seen of their research so far, and would encourage readers with the relevant background to apply.


Meantime, I of course continue to be interested in quantum computing! I’ve applied for multiple grants to continue doing quantum complexity theory, though whether I can get such grants will alas depend (among other factors) on whether the US National Science Foundation continues to exist, as more than a shadow of what it was. The signs look ominous; Science magazine reports that the NSF just cut by half the number of awarded graduate fellowships, and this has almost certainly directly affected students who I know and care about.


Meantime we all do the best we can. My UTCS colleague, Chandrajit Bajaj, is currently seeking a postdoc in the general area of Statistical Machine Learning, Mathematics, and Statistical Physics, for up to three years. Topics include:

  • Learning various dynamical systems through their stochastic Hamiltonians. This involves many subproblems in geometry, stochastic optimization, and stabilized flows that would be interesting in their own right.
  • Optimizing task dynamics on different algebraic varieties of applied interest — Grassmannians, the Stiefel and Flag manifolds, Lie groups, etc.

If you’re interested, please email Chandra at bajaj@cs.utexas.edu.


Thanks so much to the folks at Open Philanthropy, and to everyone else doing their best to push basic research forward even while our civilization is on fire.
