You can still let your kids roam freely and play. You can self-educate and hire an aristocratic tutor.
I can hire an aristocratic tutor?
Yeah, no kidding. I mean, I make six figures before taxes and I can’t hire a tutor for myself, so I make do with self-education – AKA reading books.
Every time I see Jakob Greenfeld, I find myself surprised that he has his own website. He’s a B2B sales guy, and people like that belong on LinkedIn posting the sort of material that sometimes shows up on r/linkedinlunatics.
But every once in a while I read something of his that suggests he isn’t completely full of shit. But this bit about hiring your own aristocratic tutor if you’re not in the aristocracy isn’t it.
I honestly have no idea who this guy is but the overall commentary on the topic of optimization was interesting.
Quickly skimming through the linked article about aristocratic tutoring, it seems to be a term used to distinguish it from modern tutoring?
However, despite its well-known effectiveness, tutoring’s modern incarnation almost universally concerns specific tests: in America the Advanced Placements (AP) tests, the SATs, and the GREs form the holy trinity of private tutoring. Meaning that contemporary tutoring, the most effective method of education, is overwhelmingly targeted at a small set of measurables that look good on a college resume.
This is only a narrow version of the tutoring that was done historically. If we go back in time tutoring had a much broader scope, acting as the main method of early education, at least for the elite.
Let us call this past form aristocratic tutoring, to distinguish it from a tutor you meet in a coffeeshop to go over SAT math problems while the clock ticks down. It’s also different than “tiger parenting,” which is specifically focused around the resume padding that’s needed for kids to meet the impossible requirements for high-tier colleges. Aristocratic tutoring was not focused on measurables.
Disclaimer: I didn’t read the linked article (mainly because I don’t care about this topic, and it wasn’t relevant to the point the piece was making about optimization).
Yeah, I saw that one of the things the piece touches on is recommendation algorithms, and if you’re ever looking to discuss problems with those things, I’m on board for that. With that said, I’m not sold on Jakob Greenfeld’s particular angle here.
The TikTokification of social media algorithms flooded everyone’s timelines with soulless content optimized for maximum engagement. […] But the core issue is that on average all of [the alternatives] will suck a little bit more than your default optimized experience.
This kind of talk makes it sound like recommendation algorithms are actually optimized for the user experience (and not just metrics).
I’m not convinced of that. If something doesn’t let me unsubscribe or remove certain topics at will, that’s not optimized. That’s friction, that’s disrespect, that’s annoyance, that’s a damper on my experience. I’m not a TikTok user myself, and I was pretty aghast when one of my coworkers told me that simply showing me one weird video would “ruin” their algorithm, in that it would mean automatically subscribing them to more of the same. Conversely, I remember once seeing a post by someone describing how she was watching every video on a topic, even the boring ones she didn’t care for, simply to “train” the algorithm into showing her more of the topic. I’m not putting up with that. A social media site shouldn’t make me do a bunch of homework in place of what should have been a checkbox.
And even setting that aside – people have this notion that recommendation algorithms are actually based on their own choices, but there’s nothing to guarantee that’s true. As others have pointed out before, there’s nothing to stop a corporation from saying, okay, we want this specific video to get a lot of views, so let’s insert it into a lot of people’s feeds and treat it as “recommended” for them. How would you be able to tell the difference?
Issues like these are a lot more salient to me than what’s described in the linked piece.
Yes, and if someone wants to complain about how “aristocratic tutoring” isn’t as common as they’d like, I think the clue is in the name.
That bit was one of the more overt moments in the Greenfeld piece that gives it a certain… smell… to me. So while there are aspects of the complaints that I’m sympathetic to, this kind of talk seems to be taking the whole idea in a direction that gives me pause, and I don’t want to endorse an entire line of argument just because I agree with one fragment of it.
I don’t think this is what he’s saying. I think his argument is that the feeds are optimized for engagement, people do get hooked (and the proof of that is that people can’t quit those stupid platforms) but at the same time they don’t realize that they’re giving up the “soul”.
Albums typically contain not just great or even good songs.
Not every movie recommended by a movie critic will hook you right away.
Most risky marketing campaigns will fail spectacularly.
But that’s precisely the point.
I think the argument is that in order to preserve the soul of things you have to accept that not everything is going to be perfectly tuned to your taste. And yes, “taste” is another metric.
this kind of talk seems to be taking the whole idea in a direction that gives me pause
So, which direction do you think it’s taking? And what part don’t you agree with?
Because to me the argument being made is pretty simple and reasonable:
Albums typically contain not just great or even good songs. Not every movie recommended by a movie critic will hook you right away. Most risky marketing campaigns will fail spectacularly. But that’s precisely the point. The imperfections, rough edges and unpredictability are where the soul lives. It’s what makes the human experience human. Everyone wants frictionless experiences these days. But smooth experiences are boring. Rough edges are where personality lives. When you do find something with actual soul, the payoff is way bigger than optimized alternatives.
Do you think differently?
The spiritual language here isn’t clicking for me. Going off the rest of the text, it sounds like “soul” here stands for… exposure to imperfections? In which case, he’s arguing that… recommendation algorithms… successfully eliminate exposure to imperfections?
And they supposedly eliminate friction. I’m not convinced that’s the case; I think they just introduce a different and rather less pleasant sort of friction.
Honestly, beyond the commentary on football, I don’t think this post says much more than some vague populist, borderline anti-intellectual idea.
The issue with football isn’t the optimization, it’s that the game’s design currently incentivizes defensive play, while what people want to see is aggressive play.
This is not the first sport to have this kind of issue; George Mikan was a basketball player who popularized the practice of goaltending, AKA “standing near the hoop and just smacking the ball out of the air whenever anyone takes a shot”. Games he played in were unbearable because he’d just stand next to the hoop, jump like a spring, and smack away any ball that was anywhere near scoring a point. It was optimal, impressive, and inhumanly boring. He, ironically, was one of the people to push for a ban on the practice because he found it made the game less interesting.
Sports and classic games changing because of a strategy no one had thought of before (or didn’t think was physically possible before) is common and regular, and is likely what will happen here.
The rest of the post though?
Even games like poker and chess are now dominated by robot-like humans executing expected-outcome-optimal algorithms.
Yes, that’s what those games always were. Do you think chess players were intentionally playing worse or something? Chess players have had books and entire memorized lines for the first 10 moves of a game for decades now; did you really think they wouldn’t start charting out the rest?
The education system is no longer about curiosity and learning. It’s about optimizing test scores and gaming the system.
In what decade was it anything but gaming the system? The day when students cared more about passing than listening in class was the first day they stepped into class, realizing that they were 12 and could not give less of a fuck about what authority wanted them to do.
Fine-dining restaurants have become formulaic exercises in producing Michelin-optimal menus.
Oh no, the places you pay a lot of money for are trying to make the best meals on the planet? That’s… a bad thing?
This post vaguely gestures towards some “human core” missing but doesn’t actually elaborate on what it is, and just vaguely hints that we need to stop “looking this deeply” into things. “Why be optimal? Just keep doing what we were doing! There’s no need to keep advancing, where we are is fine.”
Because you know the funny part?
If “you need rough edges to make an experience better”, the optimal strategy is to add them back. Your idea of “rough edges” is nothing more than contrarianism.
I should probably move this to my blog, that really got some energy out of me.
I don’t think soul stands for imperfection. I think the argument is that by optimizing everything we tend to average everything out, all the time. So everything becomes kinda meh. I could be wrong, but I think he’s arguing that it’s better to live in a world with 3/10s and 9/10s than in a world filled with 6.5/10s.
I don’t think this post really says anything beyond some vague populist, bordering on anti-intellectual idea.
Now I’m so curious where you got the populist, anti-intellectual vibe ahah
It’s so interesting how different people read the same piece and have entirely different reactions.
I don’t even care about the piece itself, I’m just interested in how different people can have views that are so completely different on the same piece of text.
I got the anti-intellectual vibe from the fear and demonization of “optimization”, or, as it’s more accurately presented, progress at all. While he does mention stuff like shows being made for “maximum engagement,” he also complains about people being good at chess and highly rated restaurants making highly rated food, and adds that:
Once you start dissecting something, put it under the brutal microscope of optimization, and start measuring and maximizing every little detail, you inevitably start creating soulless pieces of slop.
It seems to me like the problem outlined isn’t the fact that the “optimization goal” is money over people, but rather this optimization existing at all. The problem starts “once you start dissecting something”. He implies that striving for improvement/perfection is nonhuman, as “[Imperfection]'s what makes the human experience human”.
This reads to me as fairly bog-standard anti-intellectualism: the fear of change, specifically regarding what is perceived as being done for the ideals of progress, under the justification that it loses “the human touch.”
The populist rhetoric is fairly entangled with this “losing the human touch” argument in my opinion, as his solution pits the “intellectual elites” who optimize everything against the everyman who needs to intentionally… be stupider? I think?
You can play Short Deck poker and Fischer Random chess.
You can still let your kids roam freely and play. You can self-educate and hire an aristocratic tutor.
He fearmongers against playing poker well and the educational system. Like, come on.
This is all so fascinating. Because I had an entirely different read of the whole argument.
It seems to me like the problem outlined isn’t the fact that the “optimization goal” is money over people, but rather this optimization existing at all.
I personally agree that in some contexts optimization as a goal is bad and should not exist. When you make anything with artistic, creative value, you don’t want to chase optimization, because optimization makes you do things that are, ultimately, quite stale and boring. And that is the whole argument I saw the piece making.
As for the argument about education, I do agree that IF (and it’s a big IF) the goal is to optimize the learning methods to pass standardized tests, that’s bad. But I also don’t live in a place that has standardized tests, so my experience with education is obviously different.
I think the broader issue is not optimization per se, but rather what we are optimizing for. And too often the modern world is designed to optimize either money-making or “success”. And over time, that’s bad. Because if all we do is converge towards those goals, then everything risks becoming the same shit.
The whole argument is basically what’s happening on YouTube where everyone is making the same stupid thumbnails with their idiotic faces on it and red arrows. It’s also the issue with the whole minmaxing culture where everyone converges towards the same optimal strategy.
Which, yeah, in some contexts is what you want. But when it comes to culture, to art, to personal expression, what you want is people trying shit. You want people to take swings, you want the rough edges.
That’s the thing - I agree with you, but heavily disagree with the way it’s phrased, because I don’t think the original post agrees with what we are both agreeing about.
Wow, that was a sentence. Let me explain.
I think the broader issue is not optimization per se, but rather what are we optimizing for. And too often the modern world is designed to optimize either money making or “success”. And over time, that’s bad. Because if all we do is converging towards those goals, then everything risks becoming the same shit.
I agree completely. The issue is always appealing to the lowest common denominator, or chasing metrics instead of trying to create a good product. The issue isn’t “optimization” generally, it’s project leads (whatever the project may be) who lose touch with what their project is ostensibly supposed to do: teachers who chase grades rather than learning, for example.
But the post complains about more than that. It complains about optimal play in games, because “it’s wrong.” It complains about fine dining, because it’s “inhuman.” I know I keep beating the drum of these two examples, but it’s exactly these outliers in rhetoric that you need to watch out for, because the person arguing them was sure they made just as much sense as the rest. If you read a statement and one of the examples is completely off, think about what the person making the argument is actually saying: what would make this example similar to the rest?
As a comical example, this exchange from the first season of Smiling Friends exemplifies what I’m trying to get you to spot:
You must all go and spread the word of FROWNING! Pretty soon, EVERYONE will embrace sadness, and there’ll be no such thing as SMILING! And once that’s done, we can finally eradicate ALL THE PUERTO RICANS ON THE PLANET!
Crowd: YEEAAHHH- Yea- ah- wuh? what’d he say?
…I mean, make everyone on the planet FROWN!
The joke here is that, despite the crowd agreeing with 90% of the rhetoric, the person talking accidentally slipped up and admitted a part of his worldview that no one else is supporting. Also, I really like Smiling Friends, so this felt like a good way to exemplify that.
But to give another example: there is absolutely nothing stopping you from “let[ting] your kids roam freely and play”; in fact, that’s what most educators suggest! That’s the “optimal” way to let them grow! The reason this person brings it up, despite it seeming contradictory, is that the argument at its core comes from being anti-educator and anti-establishment, not anti-money-chasing. There’s a big rumble online about how “KIDS AREN’T ALLOWED TO BE KIDS ANYMORE” by some… well, fascist-adjacent influencers, let’s call them. The same is true of the push against movie critics, who are the most vocal about how the modern slopization of movies is a bad thing. The issue the poster has with movie critics isn’t that they’re “optimal”, because god knows they disagree with each other all day. It’s that they’re “the elite.”
We both agree on one issue, but the post is talking about something else, and expressing it through language that appeals to the grievances of “the common man”; classic populism.
Well, I guess it depends what the goal is, I suppose. For example, I’ve been following basketball for more than 25 years at this point, and I’ve watched the game evolve over time. The current play style is clearly optimized for some metrics, but as an entertainment product (which is what professional sport is) it has become incredibly boring to watch most of the time. Which isn’t to say players are not skilled or that the game is bad. It’s just boring, since most games feel the same: everyone is trying to play the same optimal way.
If the goal is to provide a fun entertainment product, then I guess you can consider that type of optimization wrong. Wrong in the sense that you’re optimizing for the wrong thing.
The fine dining example also makes sense to me. I know people in the restaurant industry. I also know people who have worked at starred restaurants. There are a bunch of them here. If the goal is “get a star”, then the risk of simply emulating what the other starred restaurants are doing is very high. That’s what I think he’s referring to in the piece. But this is an even more complex issue, since it has a multitude of factors.
But now, your last point is a lot more interesting. As I said, I honestly didn’t see the populist, anti-intellectual argument in there, and maybe this is just because we are all primed to read different things into what we are presented based on the reality we live in. But I have a question for you:
The joke here is that, despite the crowd agreeing with 90% of the rhetoric, the person talking accidentally slipped up and admitted a part of his worldview that no one else is supporting.
What’s wrong in that case with agreeing on 90% and rejecting that 10%?
Nothing, but you should be spotting that 10% and realizing what it implies about the rest of the argument. In some cases, that 10% is alarming enough that it taints the rest of the argument, which for me is the case with this post.
There are many times when a single sentence makes me realize that another person and I are complaining about wildly different things under the same rhetoric. To give an extreme example: someone complaining about the consolidation of power and money at the hands of a handful of individuals could be talking about the openly known monopolies led by a handful of extremely wealthy people, or they could be talking about The Jews™. If it’s the latter, the rest of the argument does not matter anymore, and endorsing them at all is out of the picture.
This post is far from being the only example of normalized anti-intellectualism in online spaces (gestures broadly at anti-AI discussion that instead of discussing layoffs and corner-cutting talk about “the soul of art”), but it’s becoming prevalent enough that I don’t want anything to do with it.
Got it. I guess we are just wired to work differently in this case then, because for me that’s not how it works. I take arguments for what they are and I try to analyze the merits of the argument itself. I also don’t see this post as normalized anti-intellectualism (because if I did I would not have spent time posting it here).
To give an extreme example: someone complaining about the consolidation of power and money at the hands of a handful of individuals could be talking about the openly known monopolies led by a handful of extremely wealthy people, or they could be talking about The Jews™. If it’s the latter, the rest of the argument does not matter anymore, and endorsing them at all is out of the picture.
I agree with this. Let me ask you a question though: in this example, if you then ask concrete follow-up questions about what we should, in practice, do to tackle the issue of consolidation of power and money, and those concrete actions are reasonable, does it matter if the person proposing them had some entirely idiotic idea about The Jews™?
Because that to me is usually what matters: what really happens. “Why” something happens matters a lot less to me than “what” happens and the consequences of that what.
I’m cool if we do the right thing motivated by the wrong reason, if that makes sense.
Yea. It doesn’t make their initial argument invalid, but it does make me reluctant to say “I agree with you,” because I do not trust where they’re going to take these ideas. If their original argument has dogwhistles or other commentary that only makes sense within the context of said problematic mindset, it makes the argument entirely invalid for me.
I’m cool if we do the right thing motivated by the wrong reason, if that makes sense.
I support that too. I’d just rather not push that wrong reason onto others.
I also don’t see this post as normalized anti-intellectualism (because if I did I would not have spent time posting it here).
I figured lol
The “aristocratic tutor” line was one of those moments for me. Not that I had been fully agreeing up until that point, but that’s where something implicit became a little more explicit. I clicked the link and didn’t read the whole thing through, but I read enough to see that it’s about “the decline of genius,” i.e. that we (humanity) aren’t producing “geniuses” anymore, and as a part of this argument the author cites another person who makes reference to, of all things, Oswald Spengler’s Decline of the West.
If we’re making things kinda meh then we’re not optimizing.
Or at least, not optimizing for enjoyment.
In any case, the idea of recommendation algorithms leading to a lot of stuff converging toward bland generic sameness is, I think, an idea that can actually hold water.
Yes!
If we’re making things kinda meh then we’re not optimizing.
The people who are running most of these things are definitely optimizing, but not for enjoyment. They’re optimizing for revenues, retention, time spent rotting in front of a screen, you name it. But I think that was kinda the whole point of the argument. Most of what makes up culture seems to be optimized in that way. And I don’t entirely agree but it’s certainly true for mainstream culture.