Not for the faint of art. |
Complex Numbers A complex number is expressed in the standard form a + bi, where a and b are real numbers and i is defined by i^2 = -1 (that is, i is the square root of -1). For example, 3 + 2i is a complex number. The bi term is often referred to as an imaginary number (though this may be misleading, as it is no more "imaginary" than the symbolic abstractions we know as the "real" numbers). Thus, every complex number has a real part, a, and an imaginary part, bi. Complex numbers are often represented on a graph known as the "complex plane," where the horizontal axis represents the infinity of real numbers, and the vertical axis represents the infinity of imaginary numbers. Thus, each complex number has a unique representation on the complex plane: some closer to real; others, more imaginary. If a = b, the number is equal parts real and imaginary. Very simple transformations applied to numbers in the complex plane can lead to fractal structures of enormous intricacy and astonishing beauty. |
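To make that last claim concrete, here's a minimal sketch of the most famous example: the Mandelbrot set, which comes from repeatedly applying the simple map z → z² + c to each point c of the complex plane and checking whether the results stay bounded. This is my own illustration, not anything from the entry above; the iteration limit and the plotting window are arbitrary choices.

```python
# Minimal sketch: iterate z -> z*z + c for points c in the complex plane.
# Points whose orbits stay bounded belong to the Mandelbrot set; the
# boundary of that set is the famous fractal.

def escapes(c: complex, max_iter: int = 50) -> bool:
    """Return True if the orbit of 0 under z -> z*z + c escapes |z| > 2."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return True
    return False

# Crude text rendering of the complex plane: real part on the horizontal
# axis, imaginary part on the vertical, just as described above.
for im in [y / 10 for y in range(12, -13, -2)]:
    row = ""
    for re in [x / 20 for x in range(-40, 21)]:
        row += " " if escapes(complex(re, im)) else "#"
    print(row)
```

Run it and a crude ASCII silhouette of the set appears; zooming in anywhere along its boundary is where the "enormous intricacy and astonishing beauty" live.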
#stop5g There is a town in West Virginia, not too far from where I live across the state line, called Green Bank. Green Bank was apparently named after the green bank of a nearby river; as with many such names, I think you'd be unsuccessful in finding any such greenery. Kind of like how a subdivision is defined as a place where they cut down all the trees and name the streets after them. The reason Green Bank is known outside of an obscure county in an economically depressed area is the Green Bank Telescope, a large radio telescope used for astronomical observations and probably sneaky spy stuff too. Thing is, radio astronomy is notoriously subject to interference from Earthbound electromagnetic radiation. I say "electromagnetic radiation," because that's what it technically is, but it unfortunately contains the word "radiation," which is ooga-booga scary. The light that we see is technically radiation, and it's mostly harmless—we don't put on sunscreen to shield ourselves from visible light, but from invisible ultraviolet. So, yes, some portions of the EM spectrum are harmful. Some are not. Has to do with wavelength and energy. So Green Bank, WV is part of the National Radio Quiet Zone. For some people, I know, that sounds like Paradise Itself. Reportedly, some people have moved to the town solely because of their delusional belief that they're allergic to EM waves. For me, that would be the Third Circle of Hell. Yes, only the third; I can think of worse places to live than West Virginia. I know I shouldn't say that as a lifelong Virginian, but yes, objectively, there are worse places. Cleveland, e.g. But, despite the relatively minuscule number of crazies flocking to GB to escape the evil waves, there are a whole lot more people who bought into the lie that 5g cellular networks are dangerous. And those people are everywhere. If you don't have good mobile service in your area of the US, and it's not part of the NRQZ, you can blame them. And you can also blame the people who want good cell coverage, but refuse to allow towers in line of sight because they're "ugly." Sometimes, a compromise is reached with the NIMBYs: cell phone towers camouflaged to look like very tall trees. You see them sometimes from interstates: rising mightily over the surrounding forest, a few desultory green branches stuck into the tower's structure. They don't fool anyone. They're like incompetent snipers in ghillie suits. I'll just end with this: it was at Green Bank that Frank Drake came up with his famous equation-that's-not-an-equation for estimating the probability of tech-capable extraterrestrial life. I've talked about it quite a lot in here; just do a search on the blog for "Drake Equation." This is really irrelevant to anything, but then, so is opposition to 5g. |
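For reference, since the entry doesn't spell it out: the Drake Equation, in its usual form, estimates N, the number of detectable, technology-using civilizations in the galaxy, as a product of a star-formation rate, a string of fractions, and a lifetime:

```latex
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
```

Here R* is the rate of star formation, f_p the fraction of stars with planets, n_e the number of habitable planets per such system, f_l, f_i, and f_c the fractions of those that go on to develop life, intelligence, and detectable technology, and L how long such a civilization keeps transmitting. The "not-an-equation" jab fits because the factors from f_l onward are, so far, pure guesswork.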
Just in case anyone still doubts the value of science, here's incontrovertible proof of it, from Physics magazine: Cooking Flawless Pasta. Scientists have pinpointed energy-efficient ways to cook al dente pasta and developed an infallible recipe for the perfect cacio e pepe sauce. I would only object to the adjective "infallible" there. No matter how refined the recipe, someone will find a way to muck it up. A bowl of steaming hot pasta covered in your favorite sauce and dusted with a healthy dose of parmesan cheese comes high on the list of ultimate comfort foods. I don't believe in the concept of "comfort food." If your food doesn't make you feel good, re-evaluate your life choices. But cooking that pasta to perfection can be more difficult than seemingly simple recipes imply. Recipes can never tell the full story. For one thing, pasta texture is dependent on things like water hardness. So one needs to understand one's local environment and modify recipes accordingly. In one study, Phillip Toultchinski and Thomas Vilgis of the Max Planck Institute for Polymer Research, Germany, studied whether perfectly al dente spaghetti could be prepared in a more energy-efficient way. Yes, "perfectly" is also a subjective concept, but as some of these studies note, you can decide on optimal, measurable, objective parameters and test to determine what process results in a product closest to those parameters. In a second study, Matteo Ciarchi and Daniel Busiello of the Max Planck Institute for the Physics of Complex Systems, Germany, Giacomo Bartolucci of the University of Barcelona, Spain, and colleagues developed a recipe for making perfect cacio e pepe, a three-ingredient cheese sauce that is surprisingly easy to mess up. Just how many Max Planck Institutes are there, anyway? There were other physicists, you know. Also, please take note that none of these researchers did their work in Italy. If I were writing the grant proposal, I'd leave room for a lengthy research trip to the country that's known for its pasta. You know, for science. Judging solely by the names, though, at least some of these researchers might have been Italian. “It is very difficult to make this sauce,” says Busiello. “You are almost always doomed to fail.” With that attitude, though, maybe Russian. The study by Toultchinski and Vilgis was inspired by a brouhaha over a 2022 Facebook post by physics Nobel laureate Giorgio Parisi. In that post, Parisi suggested that chefs could reduce the energy needed for cooking pasta using a “heat-off-lid-on” method. Oooh, yeah, that might stir up a good bit of anger. But chefs questioned whether this method could achieve al dente pasta—pasta that is soft on the outside and crunchy at its core. Well, chefs, all you had to do was try it. The article goes on to describe how the researchers themselves tried it, and the results. Vilgis says that while these results show that pasta can be cooked in a more energy-efficient way, when it comes to taste and texture, there is no substitute for the tried-and-tested method. “If you want perfect al dente pasta, you have to cook it the traditional way,” he says. Sometimes, science shows that we've been doing something wrong all this time. It's designed to test assumptions we've made from intuition, tradition, and the self-contradictory "common sense." But, sometimes, it turns out the other way; that the traditional method is, after all, the best. And that's good, too. 
The second pasta study by Ciarchi, Busiello, Bartolucci, and colleagues focused on how to make a popular but tricky sauce called cacio e pepe. For this sauce, pecorino cheese is mixed with pasta water and black pepper to make a glossy emulsion that coats the pasta in cheesy goodness. Call me ignorant, but I had no idea "cheesy goodness" was a scientific term. As before, methods and results are provided; I'm skipping over a bit. “The team came up with a really practical recipe to getting the perfect sauce every time,” Fairhurst says. He adds that these kinds of studies—where physicists apply their knowledge to food problems—can really help consumers engage in science. “It’s everyday science you can do in the kitchen; you can’t do that with particle physics problems.” Well, now, I suppose that depends on your kitchen, doesn't it? |
#stressed There is a myriad of ways to denote stress in online writing. One can bold the desired portion of text, or use italics—though the latter runs the risk of being confused with certain styles of quotation, such as the one I use in this blog. Another option is to underline the stressed text. Classically, an underline denotes a book title, but I think that style has fallen out of favor. Besides, if you do use it for that, the words generally follow an established capitalization pattern; this and other context clues should be sufficient to distinguish text underlined for stress from text underlined for a book title. There are also numerous colors that can be used to make words and/or phrases stand out. And let's not forget the potential of increasing a word or phrase's size, though it must be remembered that this can cause problems with formatting and flow. Don't overlook the impact (pun intended) of font type, either: a word in Impact will definitely stand out amidst the standard font here; Times can give your stressed bits an old-time serif look; and Comic, while sneered at by font snobs, has its uses, especially as the long-sought sarcasm font. Let's not forget the trick of making certain portions of text *stand out* using only ~keyboard~ strokes. Capital Letters, on the other hand, shouldn't really be used for stress. I use them sometimes to indicate Important Ideas, but I'm probably wrong to do so; it's a kind of stylistic signature for me. Also, the technique of typing in ALL CAPS is frowned upon except in rare circumstances, and definitely not to be used exclusively because then it makes you look like you're writing your manifesto. To *really* make your desired text stand out, you can use almost any combination of these techniques. The possibilities are... well, not limitless, but numerous, and a better mathematician than I am could probably enumerate them more precisely. But as I said up there at the beginning, "there is a myriad of ways to denote stress." The word 'myriad' comes from a Greek word meaning ten thousand, and ten thousand is a pretty large number, more combinations than you probably truly need. It's a good number, so we'll go with that for now. The stress on 'myriad,' incidentally, falls on the first syllable. |
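Since the entry invites a better mathematician to enumerate the combinations, here's a rough count. The list of techniques comes from the entry above, but the variant counts (how many colors, fonts, sizes) are invented purely for illustration.

```python
# Rough enumeration of ways to stress a bit of text, using the techniques
# mentioned above. Each technique is either skipped or applied in one of its
# variants; multiplying the options and subtracting the "no stress at all"
# case gives the count. Variant counts are made up for illustration.

variants = {
    "bold": 1,
    "italics": 1,
    "underline": 1,
    "CAPS": 1,
    "size": 3,      # e.g. slightly, noticeably, obnoxiously bigger
    "font": 5,      # Impact, Times, Comic, and a couple of others
    "color": 140,   # the named web colors, give or take
}

total = 1
for options in variants.values():
    total *= options + 1   # +1 for "don't use this technique"
total -= 1                 # drop the all-skipped case

print(total)  # 54,143 with these made-up numbers
```

With those invented numbers it comes to 54,143, comfortably past a literal myriad; the exact total obviously depends on how many colors and fonts you're willing to inflict on your readers.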
Okay, here we go again. From Big Think: What if we’re alone? The philosophical paradox of a lifeless cosmos. If humanity lives in an otherwise barren Universe, we’ll have to forge philosophy that fills the void. As there isn't any credible, unambiguous evidence of extraterrestrial life, the headline seems to be arguing "paradox" from the widespread belief of "there just HAS to be other life out there," which in turn comes from the essentially misanthropic "we're not special" argument. The idea that we might have cosmic neighbors has captivated the human imagination for decades. It’s not just sci-fi enthusiasts who ponder the possibility of extraterrestrial intelligence (ETI) — the general public seems to lean strongly toward the belief that we’re not alone. And we're off to a bad start in the lede, which makes no distinction between "life" and "extraterrestrial intelligence." Now, one could argue that intelligence is a quality of all life, from bacteria to blue whales, but that's not what ETI usually connotes. ETI refers, in the public imagination at least (and the article is written for the public, not for scientists), to technology-wielders like ourselves. This is one reason I don't like to use "intelligence" in this context. Another is that there are demonstrably other intelligent species right here on Earth, and none of them seem inclined to make rockets or broadcast radio. So, for the purpose of my discussion here, I'll use ETL as shorthand for extraterrestrial life, and ETT for extraterrestrial technology. Both are entirely hypothetical, but, as I've noted dozens of times, ETL is far more likely than ETT (though ETL is almost certainly a prerequisite for ETT), and there's nothing about evolution that requires the development of a species that likes to send robots to neighboring planets. As for "the general public seems to lean strongly toward the belief that we're not alone," remember, "the general public" is pretty damn gullible when it comes to reports of strange lights in the sky and whatnot. Columbia University Professor David Kipping often finds that, when discussing astronomy and the potential for life elsewhere in the Universe, people almost universally insist, “Surely we can’t be the only ones!” Ah, yes. Argument from incredulity. It's a big universe. I mean, it's really, stupendously, mindbogglingly, incredibly big. So I understand the insistence that there's ETT out there somewhere. I actually share that belief. But just as you humans find it difficult to grasp just how horrendously huge the universe is, you also have little grasp of tiny probabilities. If you did, none of you would ever play the lottery. Oops. I meant "we" and "us," fellow humans. As we'll see, though, the chances of ETL alone are probably much higher, and it wouldn't surprise me a bit if we found some. Occam’s razor nudges us toward the notion that “life out there” is the easiest explanation — it just feels right. Snort. The usual statement of Occam's Razor is: "Entities must not be multiplied beyond necessity." It may "feel right," but throwing ETT or even ETL out there as a given is the literal definition of multiplying entities beyond necessity. That said, Occam's Razor isn't proof of anything. It's meant to guide hypotheses and research, not proclaim that the simplest explanation is definitely the answer. If you hear a funny noise from your car's engine, for example, you start by looking at the simplest possible explanation, maybe a loose belt. 
But if all the belts are tight, then maybe it's because your engine is fried: a much more complicated fix. The principle of mediocrity chimes in, reminding us that our little corner of existence is probably not all that unique. I can't argue against that bit. And the Copernican principle gives a knowing nudge, sweeping away humanity’s old, self-centered fantasy of cosmic importance. The difference between that and the idea of ETL is that we have concrete proof that Earth isn't the center of the universe, whereas there's no proof yet of ETL. To believe we’re alone in this vast, wild expanse feels not only improbable but strangely outdated, like clinging to some universal map where Earth is still at the center. Now, I have to admit that the idea of humanity's uniqueness is a religious tenet to many people, and I have no desire to argue one side or the other from a religious perspective. I want there to be ETT just to poke the eye of dogmatic religion, but that doesn't mean it's there. With the discovery of exoplanets, we’ve learned that our galaxy overflows with diversity: Billions of planets orbit stars in the so-called habitable zone, where conditions might support liquid water. Which is a fine argument in favor of ETL. Once, Earth’s oceans seemed unique; now, hidden seas on moons like Europa and Enceladus suggest that watery worlds may not be so rare after all. But let's not forget that Europa and Enceladus both orbit gas giants outside the Sun's habitable zone. Still, that only increases the chances for ETL to gain a foothold, right? Well, maybe. Venus and Mars are both in the Sun's habitable zone, and we still haven't found unambiguous evidence for life, extant or extinct, on either. As Professor David Kipping aptly points out, the data paints a tantalizing picture — just as compatible with a Universe brimming with life as it is with one where we stand solitary under the stars. To insist there must be life out there, he reminds us, is to trade evidence for optimism. And I think we all know where I stand with optimism. Still, it's absolutely important for us to make the search. It's also important to make the distinction between ETL and ETT as I do, because the signs we're looking for are different for each (though of course if we find ETT, it almost certainly implies ETL; just not the other way around). Even under ideal conditions, life doesn’t simply spring forth; no experiment has succeeded in replicating it. No, but that may be because the process takes longer than your usual lab experiments. What experiments have been done suggest that it's possible. Whether it's inevitable or not, or simply very difficult, we don't know, due to small sample size. Earth’s unique circumstances — a stabilizing Moon, plate tectonics, and precisely the right chemical mix — might be one in a trillion. If it's only one in a trillion, then the universe harbors quite a bit of life. But it might also be one in a googolplex. Or it could be one in 2. We don't know. As I've said before, once you've won the lottery, the odds of having won the lottery are 1 in 1, and prior probabilities make no difference except to remind you how lucky you got. Evolution adds yet another filter: While microbial life could be common, the leap to intelligent beings may require an almost comical series of accidents and near-catastrophes. There's that word again. Intelligence. 
It's too slippery in this context, and too easy to make self-contradictory jokes about (if you type "there's no intelligent life on Earth either haha," the simple act of typing that and sending it over a worldwide network of computers, using old and new technology developed by humans, proves your statement wrong). We share the world with several other beings that can be labeled intelligent: crows, elephants, cats, octopodes, etc. And we don't have the capability to communicate with them beyond a very basic level, and, as I noted above, none of them are trying to send radio signals. So what we're looking for, again, is technology. Okay, I've gone on long enough, though the article continues for a while. I'll just point out one other thing that the article touches on later: it's remarkably hard to prove a negative. If we find ETL, that's enough to show that ETL exists. If we don't find it, though, that doesn't mean none exists. Like I said, it's a big universe, and the maps have always shown fantastic creatures in unexplored places. And sometimes, we search those places, and find fantastic creatures—just not the ones we were expecting. |
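One back-of-the-envelope note on that "one in a trillion" line above, using a commonly cited ballpark of roughly 10^23 planets in the observable universe (an assumption of mine, not a figure from the article):

```latex
N_{\text{life}} \approx \underbrace{10^{23}}_{\text{planets, ballpark}} \times \underbrace{10^{-12}}_{\text{one in a trillion}} = 10^{11}\ \text{living worlds}
```

Which is why "one in a trillion" still reads as "quite a bit of life," while something like one in a googolplex rounds the answer back down to just us.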
Just digging back to April of 2021 today, a time that is simultaneously far in the past and only four years ago. I did an entry based on a Cracked article: "Real Men of Genius." The title, which I didn't explain then, comes from an old (90s-noughties) ad campaign for a product I despise, but whose promotions I found amusing. Yes, ads can be amusing. Maybe once a decade. As for the article itself, here's me, back then: It's been said that with genius comes a certain level of insanity. I didn't clarify this then, but I don't really believe what's "been said" in this case. Humans can be weird; weirdness isn't necessarily insanity; and a famous person's weirdness tends to get amplified. The article doesn't mention mental illness; that was all my take, and I probably shouldn't have done it. That amplification seems to be the case with the first one, about Nabokov. Lots of people have sleep problems. Would his have been well-known if he hadn't been famous? As for the next item: George Patton claimed to be the reincarnation of several past soldiers. What I didn't acknowledge then, and probably should have, is that lots of people hold a belief in reincarnation. It's more common in Eastern cultures, but it's hardly unheard-of in the West. So, whether one believes Patton or not, he was hardly alone in his assertions. Not really anything else to add or subtract, except that I'm pretty sure some of your beliefs, habits, or activities, and mine, would get pegged as weirdness if we were famous. The title of the original article specified "famous smart people with little-known problems," but it's not like you could build an article around unknown people with little-known problems. |
A bit from Psyche that, as if I were a superconductor, I just couldn't resist: Why it’s possible to be optimistic in a world of bad news. The original optimist, Leibniz, was mocked and misunderstood. Centuries later, his worldview can help us navigate modern life. As far as I'm concerned, the only way to be optimistic in a world of bad news is to ignore the news entirely. Ignorance is bliss. No news is good news. Stay in your cave watching shadows flicker. Not clear to me: why optimism is supposed to be a good thing. As for Leibniz, well... he was, famously, a contemporary of Newton. They both invented calculus, mostly independently, and with different approaches. Newton, when he delved into philosophy, was into alchemy, which has been thoroughly debunked. So just because someone discovers something about math or science doesn't mean their philosophy should be taken as established fact. See also: Pascal, Descartes, Fermi, etc. What does it mean to be optimistic? We usually think of optimism as an expectation that things will work out for the best. There are no happy endings. There are only stories that end prematurely. Nowhere is this better exemplified than in the formulaic fairy-tale ending: "And they lived happily ever after." Because, unless you take that metaphorically to mean that now they've achieved a sort of immortality since they've been recorded in a story, that's not how any of this works. In short, that definition of optimism is demonstrably and empirically wrong. While we might accept that such expectations have motivational value – making it easier to deal with the ups and downs of everyday life, and the struggle and strife we see in the world – we might still feel dubious about it from an intellectual perspective. Obviously, I do feel dubious about it from an intellectual perspective. Also, an emotional one. I do not, however, consider it to have motivational value. If I think things will somehow work out for the best, I have no motivation to work at it. If, however, I expect the worst, then I'm motivated to make things better. Not that I actually do, mind you. My engine's always running, but the gearshift's almost never engaged. Optimism is, after all, by its nature delusional; ‘realism’ or outright pessimism might seem more justifiable given the troubles of the present and the uncertainties of the future. "Seem?" No. It absolutely is. Those of us familiar with Voltaire’s celebrated novel Candide, or Optimism (1759) might be reminded of his character Dr Pangloss, and his refrain that all must be for the best ‘in this best of all possible worlds’. You know the real difference between an optimist and a pessimist? The optimist thinks we're living in the best of all possible worlds. The pessimist knows that we are. Pangloss, a professor of ‘metaphysico-theologico-cosmolo-nigology’, is a vicious caricature of the great German polymath Gottfried Wilhelm Leibniz, and his catchphrase is Voltaire’s snappy formulation of the German’s attempt to provide a logical argument for optimism. Well, that settles it: now I have to read Voltaire. Or, really, a theological argument. Leibniz didn’t set out to explain why some people are perpetually cheerful about their prospects, but why an all-powerful, all-seeing and all-loving God allows evil to exist in the world. 
So much time has been wasted on that question (known to philosophers as "theodicy"). So much ink, so many electrons, so much conversion of useful energy into useless energy, that the entropy generated by the efforts surely accelerated the inevitable heat death of the universe by centuries. And it has one really blindingly obvious solution: there is no all-powerful, all-seeing, and all-loving god. This ‘problem of evil’ has been debated for millennia, but it was Leibniz who first attempted to reason his way to an answer, rather than look to scripture for one. I'll give him a few Philosophy Points for that. His inspiration came from his realisation, in the early 1680s, that the path taken by light through a system of prisms or mirrors always followed the ‘easiest’, or ‘optimal’, path from source to destination. Which, scientifically, turns out not to be the case, but he couldn't have known that. Up until that point, it had generally been assumed that the cosmos was precisely the way it had to be... While there might be many possible ways to make a world, there’s only one optimal way. And this view of the world came to be known as optimism. And here's where I get to kick myself: for someone with such strong opinions about optimism, and such a keen interest in etymology, I never knew that or made the connection, but it does seem to be true (though Leibniz wasn't the one who coined the word). Leibniz’s view, put forward in his book Theodicy (1710), did not win instant acclaim. And yes, that's where that word comes from. Leibniz put together the Greek roots for "god" and "justice," and turned it French; it was later Anglicized. But was Leibniz on to something? Could his worldview help us regain a clearer sense of how to be optimistic in the present day? Even if the answer to that is "yes," which I'm not taking a stand on either way, I'm still not clear on why this would be desirable. In his day, the idea that the world could be arranged any differently was novel to the point of outlandishness. Over centuries, that gradually changed as it became clearer that the cosmos contains places that are nothing like Earth, and that our planet itself had been dramatically different in the past. But it shot to prominence in the middle of the 20th century, when both philosophy and physics converged on the idea that ours is only one of many possible universes – or, at least, that this is a useful way to think about certain problems in logic and quantum theory. That's a popular idea now, embodied in more than a few works of film and literature. The problem is, lots of people seem to accept it as scientific fact—which it is not. It is, as the article notes, "a useful way to think about certain problems," but that doesn't mean it reflects reality. This intellectual respectability has turned into cultural ubiquity: the idea that there are many possible worlds is intuitively appealing in a time when ever fewer people accept the idea of a divine plan. We are more likely to believe that the future is open, with many alternative paths we could take from today to tomorrow. And that may well be the case. Here's the problem, though: this article frames that philosophy in terms of the aforementioned theodicy. Humans, the story goes, have free will. The same story tells us that God is all-powerful, all-knowing, and all-merciful. But if God is all-knowing, then God already knows what we're going to do. In other words: ask yourself, "Can God be surprised?" 
An all-knowing entity couldn't possibly be surprised, any more than it could create a rock so heavy even that entity couldn't lift it. And if God already knows what we're going to do, then we can't have free will (my own argument against free will has nothing to do with divine powers; I'm just taking the theological tack for this discussion). Hence, the whole concept of Original Sin falls apart, and with it much of Western theology. In other words, God knew Eve and Adam would munch that apple (or whatever), so punishing them for doing so amounts to Original Entrapment. Once, we might have assumed that our fates lay in the lap of the gods, or that implacable physical laws dictate our every move. Today, we think the responsibility lies with us. The reality, I think, is somewhere in between. We are subject to physical laws. We are also agents of change. Arguing about which is real is reminiscent of the nature vs. nurture debate, the answer to which is "both." The key to making Leibniz’s version of optimism relevant to a secular, 21st-century worldview, is to make ‘the best of all possible worlds’ an aspiration, not a statement of belief. Oddly enough, I agree with that assessment. I just come at it from a different angle: as a hopeful pessimist, I think things can change for the better, but, absent well-meaning and well-thought-out human intervention, probably will not. This might sound abstract or fanciful. But in fact, as I detail in my book The Bright Side (2025)... Yes, yes, this whole article is a stealth ad for a book. I've repeatedly stated my philosophy on that. I know this entry delves deeper into theology than my rants usually do. It just seemed to be the proper reaction to the ideas presented. |
#proudboys As I've hinted in previous entries for JI, "pound" has several meanings. As a verb, it can mean to thump repeatedly, as in "go pound sand" or "his heart pounded in his chest as he ran from the angry mob." It can also mean to have dominant sex with, probably because of the similarity of that to the rhythmic thumping associated with the first verb version of "pound." That's not even getting into the nouns, the ones for weight or currency (related to weight) or the place where they keep stray dogs (probably unrelated to weight and maybe related to "impound"). So when you say "pound proudboys," it's unclear whether we're supposed to lock them up with the other stray dogs, bone them, or punch them repeatedly into submission (and then maybe bone them). I am, of course, not advocating for nonconsensual sexual relations; I'm strongly implying that, for them, it would be consensual. Not that I'd volunteer, mind you. Even if I did swing that way, which I don't, I could think of much better uses for my time and energy. Still, they might have better luck in getting what they want if they named themselves Pride Boys. And, hey, I'm not judging, or kink-shaming. That would be massively hypocritical, even for me. And it wouldn't be something to be proud of. |
I'm jumping the queue with this article from Slate. I feel like it touches on some things that might help us better frame national and world political discourse, and I didn't want to wait for blind chance to select it at random some indeterminate amount of time from now and probably in a different blog. Oddly enough, it all starts with a woman's dress from ten years ago. It’s Been 10 Years Since “The Dress.” The viral image holds a lesson in why people disagree—and how we can learn to better understand each other. You know the one, unless you were living under a rock, have a memory even worse than my own, or are way younger than my usual readership demographic: a single photo of a dress, illustrated at the link, which some people saw as white and gold and others saw as blue and black. It was not a tranquil time. People argued with their friends about the very basics of reality. Spouses vehemently disagreed. Each and every person was on one side or the other side. It could be hard to imagine how anyone in their right mind could hold an opinion different from your own. That sounds like hyperbole now, but I'm pretty sure relationships ended over this thing. To recap: A cellphone picture of a wedding guest’s dress, uploaded to the internet, sharply divided people into those who saw it as white and gold and those who saw it in black and blue—even if they were viewing it together, on the very same computer or phone screen. And this wasn't your usual optical illusion, either. Normally, you can trick your eyes and/or brain into seeing through the illusion. The dancer moves clockwise, until you decide she should be moving counterclockwise. The plates are all upside-down until you stare long enough and they flip right-side-up. This square is darker than this other square, until you directly compare the colors used with each other, ignoring the other inputs. That sort of thing. This one involves color, too, but, as far as I know, once you saw the dress one way, no amount of convincing or trickery could make your brain flip to the other color scheme. I know that was true for me: how I saw it was how I saw it, and no trickery or convincing made me see it the other way. No, I'm not telling you what colors I saw. It's irrelevant. What is relevant is that I believed other people when they said they saw different colors (well, except for the trolls who insisted on, say, mauve and pink just to mess with the rest of us) and, being the curious type, I always wanted an explanation. I don't recall any from back then; this article, however, almost satisfies my curiosity on that front. The notorious dress, under natural lighting conditions, is unambiguously black and blue, for (almost) everyone who saw it in person, or in other photographs. It was just the one image, snapped by a mother of a bride and uploaded to Tumblr by one of her daughter’s friends, that caused so much disagreement. How can it be that there is such strong consensus about the colors of the actual dress, but such striking disagreements about its colors in this particular image? And no, it's not because everyone who sees it a different way is brain-damaged... which is a preview of the point made soon after in the article: While the colors of a piece of clothing might be a trivial thing to disagree about, we can all learn a thing or two from the dress about how to navigate high-stakes disagreements. And no, it's not just about how to argue or debate effectively. Why did people disagree about the dress? It’s all in the lighting. 
There's more of that explanation at the article; I don't see the need for, or wisdom in, reproducing all of it here. There are also some other examples of color perception differences. One thing that you might notice about all of these examples: Your brain never tells you “We really can’t tell what the color is because we don’t have all necessary information available.” There’s no flag that goes up saying “Just FYI, your assumptions did much of the heavy lifting here.” The brain prioritizes decisive perception (giving you the ability to take decisive action) over being paralyzed by uncertainty and doubt. A lesser author might have made up some evolutionary guess for why that is. Like "This is because our ancestors needed to act quickly when they thought they saw a tiger, instead of standing there wondering if it's a predator." I just made that up. It might be true. It likely is not. Sure, there's an evolutionary reason for it; what's guesswork is what that reason is and how far back it goes along the evolutionary tree. In any case, I appreciate the lack of made-up evo-psych "explanations." This might be all fun and games when applied to internet memes, but similar convictions—sincerely held and self-evidently true to the individual—in domains like religion or politics will also be determined in large part by differential priors. And that, simply put, is the metaphor that makes this article both useful and timely. The phrase "differential priors" isn't strictly defined here, but it can be inferred by the examples used: as I understand it, it refers to unconscious assumptions based on one's unique experience. Kind of like how someone growing up poor will have a much different relationship to money than someone who grew up rich. Rather than thinking people must just be plain wrong, or stupid, a better way might be to take the disagreement seriously and try to actively elicit and discern the differential priors that led to the diverging conclusions. I get the impression that this is easier said than done. But it may be necessary in a world where we're calling anyone who disagrees with us things like "stupid," "woke," "Nazi," or "evil." Between sincere and well-meaning parties, the very fact that the disagreement exists in the first place must be due to a difference in the priors that informed the formation of the conviction. The caveat there is "sincere and well-meaning parties." That must be discerned, too. And that's hard to do when you don't see your political opponents as sincere or well-meaning. I think internet trolling contributes to muddying the waters here, but trolling existed long before the internet. All that remains is to determine what those [differential priors] might be. And like I just said, that's work. I know I'm lazy. I've built my entire life around being lazy. But this is too important, the stakes are too high, to be intellectually lazy. So, next time someone says something I vehemently disagree with, I'm going to try to get a better handle on their point of view before dismissing them as an idiot. They might still be an idiot, but as idiots have the right to vote in my country, it may be wise to see things from an idiot's perspective. And it might turn out that they're just coming at the topic from a different angle, or in different lighting. For what it's worth, I never assumed the people who saw the dress differently were stupid and trying to destroy the country and/or world. 
And I should probably apply that to political disagreements as well—at least until I'm sure they're trying to destroy the country and/or world. |
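For the curious, here's a toy sketch of how "differential priors" can flip a conclusion, in the Bayesian sense the article is gesturing at. Every number in it is invented, and the color-constancy step is grossly simplified.

```python
# Toy Bayes' rule illustration of "differential priors" (all numbers invented).
# Two viewers get the same ambiguous image. Assume the pixels are equally
# consistent with "warm indoor light on a blue/black dress" and "cool daylight
# on a white/gold dress" -- identical likelihoods. Only the priors differ.

def p_warm_light(prior_warm: float,
                 like_warm: float = 0.5,
                 like_cool: float = 0.5) -> float:
    """Posterior probability of warm lighting, given the (ambiguous) pixels."""
    evidence = like_warm * prior_warm + like_cool * (1.0 - prior_warm)
    return like_warm * prior_warm / evidence

viewers = [("Viewer A (expects warm indoor light)", 0.8),
           ("Viewer B (expects cool daylight)", 0.2)]

for name, prior in viewers:
    post = p_warm_light(prior)
    # Color constancy, crudely: discounting warm light leaves blue/black;
    # discounting cool light leaves white/gold.
    verdict = "blue and black" if post > 0.5 else "white and gold"
    print(f"{name}: P(warm light | image) = {post:.2f} -> sees {verdict}")
```

The point of the toy: when the evidence itself is ambiguous (identical likelihoods), the posterior just hands the prior back to you, which is roughly what the dress did to everyone's visual system.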
#brain Not much I can say about "pound brain" except that, from what I hear, the brain itself doesn't contain pain receptors. So no matter how much you pound your brain, it's not the gray matter itself that hurts; it's the slightly harder stuff surrounding it. Why you'd pound your brain is another issue, but, I mean... *gestures at everything in general* I get it. My ex-wife had brain surgery. They took out a chunk of brain in an attempt to fix a neurological problem. This happened shortly before we separated. When I'm feeling charitable, I'll assume that the extracted bit was the bunch of neurons that made her fool herself into believing she liked being around me. When I'm not feeling charitable, I'll think that she had the surgery before the separation so that, in case something went wrong (as something sometimes does when someone goes digging around in a brain with surgical implements), I'd be on the hook to care for whatever was left of her. The reality, of course, is probably something else entirely. Which is only fair, because whatever we think we know about the brain itself, we're probably wrong and definitely operating from incomplete information. And that makes it marvelous that we can even consider poking around in there with scalpels and forceps and whatnot, let alone doing so with any success. For instance, there was this widespread belief that the two hemispheres of the brain ruled different aspects of one's personality. A "right-brained" person was, by this theory, creative, artistic, emotional, and so on. A "left-brained" person, in contrast, was considered logical, rational, methodical, etc. Because people don't like to change their brains, this model persists in the popular imagination, but as it turns out, it's about as accurate as phrenology or astrology. Sure, some people are more creative and others, more methodical, but it has nothing to do with dominant cerebral hemispheres. For another instance, there's a tendency to use metaphors to describe the brain's function. The latest involves comparing the brain to a computer, with different sections serving different functions like processing (CPU) or memory (hard drive). Before computers, it was fashionable to compare the brain to a machine; the image of cogs turning when someone is thinking persists in cartoons and whatnot. These are more reflections of the current state of technology than of reality. But, like the Bohr orbital model of an atom, they can serve their purpose even though they're inaccurate. So I say this: the brain is like a car. You don't have to know exactly how it works to use it, but a lot of people are really, really bad at doing so. |
From The Guardian, an article that I'm going to try to be skeptical about, because I already agree with it. The big idea: why it’s great to be an only child. The notion that it’s bad to be brought up without siblings should be banished for good. It won't be banished for good, though. People cling to their preconceptions. Hell, I know someone who was absolutely convinced that their cat would "steal the baby's breath." This was someone otherwise fairly rational. When I was growing up, only children were generally regarded as unfortunate souls; lonely, socially clumsy and often bullied. Well, being bullied can happen to anyone, but I imagine it would help to have a sibling on your side for defense and/or painful retribution. I'm not convinced, however, that it can make up for the bullying and/or annoyance of having said sibling in the first place. One can avoid bullies, much of the time; one cannot avoid one's sibling. But the stereotype has proved to be tenacious – so much so that many people still feel anxiety about the issue: parents over whether they have deprived their child of the experience of having siblings, only children that they may have missed out on a crucial part of their development. Such experience and development could, I will reiterate, go both ways. It can be positive. It can also be strongly negative. I'd imagine it would be worse to have a shitty sibling than none at all. And from my own experience, having none instilled in me a powerful sense of individuality, of not having to lean on anyone else. Based on current data, it’s estimated that by 2031 half of all UK families will be raising just one child. Obviously, this article focuses on the UK, and I don't know what the stats are for other countries. I'm not sure the exact details matter; such predictions are like weather forecasts, and shouldn't be taken as absolute truth. The author's point seems to be that being an "only" used to have stigma because the situation was rare; the situation isn't rare anymore, but the stigma remains, so she throws the numbers around to support the "not rare anymore" point. As a clinical psychologist with more than 40 years’ experience working with families and children, I’d like to reassure parents that having one child is now an excellent decision – and here are some of the reasons. As someone who lacks all sorts of credentials, I'd like to reassure people that having no children right now is an excellent decision. Have you seen the world? Can you honestly say they'd have a better life than you did? Can you really afford the luxury? First, a lot of the stigma, the source of so many difficult developmental experiences, has melted away because of the numbers game. It’s simply much less unusual to have no siblings, and less likely to draw unkind attention. I think the point here is that kids are mean to anyone who's different, but being an only isn't all that different anymore, so the hammers don't go after that particular nail. Second, the data that gave the stereotype of an ill-adjusted, unhappy only child a veneer of scientific credibility has been thoroughly debunked. Much of it was the result of a questionnaire that American psychologist EW Bohannon gave to 200 adults in the late 19th century. This is the bit I'd pay most attention to. Old study, single researcher, small and demographically narrow sample size. 
From this “study” – based on secondhand opinions, biased language, and without a control group – Bohannon concluded that only children were generally spoiled, selfish, intolerant and self-obsessed. I'm not intolerant, goddammit. More recent, better-designed investigations have, unsurprisingly, utterly failed to uphold these claims. Go figure. That isn’t to say that there aren’t any differences at all between single children and others. For example, recent research in China found that they are often more competitive and less tolerant of others; but they also tend to be better at lateral thinking and are more content spending time alone. Again, I contend that being able to be alone is a positive personality trait. It's good to not cling to others for validation or emotional support. Often people’s anxieties about single-child families are projected into the future. Isn’t it better to have siblings to share memories with in adulthood and who can lighten the load of caring for elderly parents should that become necessary? I'm always seeing instances where one of the parents' many kids is the primary caregiver in those situations. Hell, it happened to the friend of mine that I had to convince about the cat thing up there. I still maintain that having kids so you'll have someone to take care of you in old age is one of the biggest acts of selfishness. It’s true that I’ve worked with a number of only children who complain of exhaustion as they care for their parents in later life. But I would counter that by noting that the worst relationship issues I’ve had to deal with in my clinics are not those between couples, but among adult siblings when it comes to sharing out responsibility for the care of their parents, and who’s entitled to what once they die. When parents are even moderately rich, all the lessons they supposedly taught their kids about sharing and cooperating apparently go right out the window. The common metaphor is that of vultures circling a dying animal, but that doesn't really happen and it's not fair to the noble vulture to compare them to selfish brat offspring. Finally, are parents who have large families happier than those who have just one child? Apparently not. Ugh. "Happier." I've ragged on this concept before, I know, but I don't think it should be relevant. Part of this is because people are, believe it or not, different. It could be that a couple wants a large number of offspring, and they might be happier. It could be that a couple wants none, and they'd be unhappy with even one, let alone more. Happiness is notoriously subjective, and someone might convince themselves they're happy just because otherwise, they'd have to change something, and change is painful. Or maybe they can't change, so they do the self-convincing. When it comes down to it, there are advantages and disadvantages, and any disadvantages for the child can be compensated for by skilful parenting. This is perhaps the key message for mothers or fathers worried about the issue: nothing is set in stone. In the case of only children, helping them learn to share, and prioritising flexibility – even allowing for some disorder – in day-to-day scheduling is extremely helpful, as these are some of the skills children with siblings acquire as a matter of course. And then you get stuff like this, which is clearly geared toward the neurotypical. So, this is an example of how I handle confirmation bias: don't just agree with the article; find things to criticize about it. Remember what I just said about happiness? 
I can't change having been an only child. I couldn't make siblings appear out of thin air (multiverse theory notwithstanding) even if I wanted to, which I don't. My unique situation is the hand I've been dealt; fortunately, it's full of aces. |
#tiktokgirls Today's title, which is pronounced "Pound TikTok Girls," might seem like it could push the boundaries of this item's Content Rating. What if I said that, in this case, they're asking TikTok Girls to volunteer at the pound? Or that, like it or not, "girl" can describe any female-presenting human or other animal, including adult specimens, as in "girlboss," "Golden Girls," or "cowgirls?" And my cats are good girls, even though they're seniors now. As in "old," not "about to graduate." So yeah, I'm not advocating violence on anyone underage. Well, unless they steal something from me, but that's not the case here. In any case, "pound" or otherwise, I'm not going to look this one up. I'm pretty sure doing so would put me on a List, and that's even though I could use a VPN, private browsing, adblock, scriptblock, etc. "They" will find a way to find me just for typing tiktokgirls into a search box. And, as you might imagine, what I said in the Exclaimer! above about Instagram goes double for DikDok. This has less to do with its ties to the Chinese government (it's not like Instagram's owner is any better) and more to do with just how vapid and insipid (both really fun words) all the KitKots that escape into the wild are, and their insistence on promoting the use of vertical video. My primary connection to the internet is via a laptop, which has a screen in landscape mode. And even when I'm using my Android, I prefer to hold it in that orientation on the few occasions when I watch videos. We've spent several generations training ourselves that landscape mode is the correct orientation for moving pictures, and suddenly the newer generation thinks they have the right to change that? No. They do not. "Oh, but that's just the style these days?" Yeah? So are Crocs; that doesn't mean I have to like it. Which, just to be perfectly clear, doesn't mean I want the government to ban either one. No, I want something far more difficult than that: that people decide, all on their own, that things like Crocs and vertical video are stupid-looking and shouldn't exist. Meanwhile, I'll continue to pretend that they don't. Consequently, while I continue to assert that there's no such thing as useless knowledge (especially for a writer) and that deliberately maintaining one's ignorance is an affront to Nature and humanity, I'm going to have to maintain my ignorance on this particular tag. It goes into the nearly-empty box in my brain labeled "Things I Don't Really Want To Know." It's in good company in there with "how does it feel to go through a wood chipper" and "what's really in the sausage I just had for breakfast." So, what the tag is about, and why it'll get a post banned from Instagram, will remain mysteries to me unless someone else wants to tell me. And that's fine; what's life without a little mystery? |
For my weekly trip to the past, this time, we're not going very deep at all. In November of 2023, I wrote this for a blogging activity which is no longer with us, hence the "invalid item" link therein: "Forgive Me, For I Have Zinned." The entry revolves around National Zinfandel Day. I did a whole blog entry on these calendar events, just a few days ago. In brief, yes, I know a lot of them are just product promotions. This one's no exception; the website lists it as being founded by "Zinfandel Advocates & Producers (ZAP)," which is totally the name I'd come up with if I were putting together an industry coalition for zinfandel. But then they had to go and make White Zinfandel, which is emblematic of everything that's wrong in the world. So many of those emblems these days. Now, to be somewhat fair, I've heard it's improved since the last time I had the misfortune of sipping it. Nor have I had any in the time since that entry. The first offense of white zinfandel is that it's actually a blush, or rosé. True enough, but I don't think I was clear that this is an offense because it breaks the rules of wine categorization. Some rule-breaking is fine and necessary for innovation. This sort of thing just confuses people. The second offense is that it's inoffensive. It's the wine equivalent of white bread, American cheese, and light beer: something seemingly crafted to appeal to the lowest common denominator, and I'm not low nor common nor a denominator. Bland, characterless, etc. Apparently, white zinfandel was invented by accident by Sutter Home (which I always called Stagger Home). Other wineries produce it now as well, but the point is, it's unsurprising that white zin is as much an American product as all those other mass-market foods and drinks. Plus, I forgot to add, fake milk "chocolate." And finally, the wine I tried when it was all the rage in the States was cloyingly sweet. (As I noted above, that may no longer be the case.) How can I call it characterless and cloyingly sweet at the same time? Because I can. Finally, "white" zinfandel tastes completely unlike the red variety, such that when I finally got around to tasting actual zinfandel, it was a real epiphany. I might actually like it better than Shiraz. Jury's still out on that. I can say for certain, though, that I prefer both of those over cabernet sauvignon. There was a bumper sticker floating around some time ago: "Absolve yourself of white zin." I haven't seen white zin in stores for a while. Maybe it's just because I haven't been looking, but hopefully, it's at least partly because tastes have improved. Given the continued presence of those other offensively inoffensive products, though, somehow I doubt it. |
#roosterapparatus There are things I've wondered about since I first encountered them, and the curiosity was so powerful that I researched them. This was harder in the old days, before the internet made such research simultaneously easier and less reliable. One of the things I wondered about pre-internet was why it is that a rooster crows, but a crow never roosters. But I didn't care enough to look it up. What I did try to find out, at some point, was why one could use either 'cock' or 'rooster' for the male poultry specimen. As it turned out, the latter word was apparently a US invention, attempting to avoid one of the more salacious meanings of 'cock.' Typical US. If the euphemism origin is true, it would be one of those relatively rare occasions when a euphemism doesn't eventually take on the same connotations as the word it replaced. That is truly weird, as that particular body part is of such paramount importance to the world that almost any word, and several gestures, can, depending on context, refer to it. One thing I have never attempted to find out, despite having lived on a farm, is how chickens reproduce. At least, apart from the obvious and clichéd chicken / egg cycle. I just didn't care, partly because we didn't have chickens and partly because I just didn't care. But you pick up things here and there, so I know more about the process than I really wanted to. All of which is to say that roosters don't really have an apparatus as we would identify it. As per the Exclaimer! up there, the tag in the title, apparently a smushing of rooster and apparatus, is banned on a certain social media platform, but it doesn't seem to be because of its reference to the nonexistent cock cock. No, a brief search revealed that the words refer to a business that sells artisanal glass products which may or may not be pressed into service for smoking the devil's cabbage. Whether the association with the natural substance that's still illegal in most places is the reason for the ban, or they're just a shady company using legitimacy as a kind of smoke screen, or it's something else entirely, I can't say. Either way, the name really is a good one. Not because of chickens or cocks, but because, as my Google search revealed, there aren't many other uses for the particular combination of "rooster" and "apparatus." Memorable and unique; good company name, though I'd have never guessed the product from the tag name. I guess maybe I was expecting it to refer to weather vanes, the clichéd depiction of which almost always involves a rooster. But no. Not dongs or schlongs, but bongs. |
Here's a great example of science imitating art. From Wired: Science Has Spun Spider-Man’s Web-Slinging Into Reality. When a US research lab accidentally created a sticky, web-like substance, it turned to Peter Parker and comic-book lore for inspiration on what to do next. The original Spider-Man had Parker sciencing up his own web-shooters and web fluid, which was less a metaphor for puberty and more a comic book shortcut to giving him web powers that weren't inherently gross. Unrealistic? Sure, but so is everything else in comic books, including getting superpowers from being bitten by a radioactive spider, and that's okay. Slowly but surely, we are making good on the gadgets we imagined, as kids, that the future would hold. And yet, no phasers. The Starfleet tricorder from Star Trek? Almost there. But web-shooting? Web-slinging? That wasn't one we really thought would make the crossover. Yeah, I gotta agree on that one. And it wasn't exactly in the plans for the scientist who has made the strong, sticky, air-spun web a reality either, Marco Lo Presti, from Tufts University’s Silklab. Okay, but Silklab definitely sounds like a superhero hangout. Or maybe a supervillain lair. A very smooth one. Fio is Fiorenzo Omenetto, professor of engineering at Tufts and “puppeteer” of the Silklab. Oh, definitely supervillain lair. “You explore and you play and you sort of connect the dots. Part of the play that is very underestimated is where you say ‘hey, wait a second, is this like a Spider-Man thing?’ And you brush it off at first, but a material that mimics superpowers is always a very, very good thing.” What? No! It's a very, very bad thing in the hands of supervillains. A lot of the Silklab’s work is “bio-inspired” by spiders and silkworms, mussels and barnacles, velvet worm slime, even tropical orchids—so working out whether this sticky web could become something useful might seem like an easy side-step for the team. Velvet Worm Slime definitely needs to be the name of a band. Probably an emo/punk/EDM one. In Stan Lee and Steve Ditko’s original 1960s comic books, starting with Amazing Fantasy #15, Peter Parker builds a “little device,” one fastened to each wrist and triggered by finger pressure, to produce strands of ejectable ‘spider webs.’ By the time of the mid-2000s Sam Raimi Spider-Man films, the web-shooting switched from a wrist-worn spinneret gadget to an organic part of his superhero transformation. And we've never let Raimi hear the end of puberty jokes since. The article describes the web-fluid development in more detail; of course, despite the hype, we're not going to get friendly neighborhood web-slingers anytime soon, if at all. So, Spider-Man capabilities when? “Everybody wants to know if we're going to be able to swing from buildings,” says Omenetto with a wry smile. But we're not there yet—so far the Silklab team itself has speculated about some potential uses for the material: the retrieval of an object that’s lost underwater, perhaps, or a drone that captures something in a remote environment. When I was a kid, there was this sticky goo that would stretch but hold together, and you could use it to pick up pennies off the sidewalk, at least until it got too dirty to be useful. This doesn't seem much more useful than that goo, at least not yet. Lo Presti is interested to hear from anyone who has read his paper and thinks they might be in need of a web-shooting silk fiber. 
"Hello, I am an aspiring supervillain and I am in need of a web-shooting silk fiber to achieve my plans of world domination." Some humans are pretty clever, though, and I'll bet they'll find uses for this that don't involve costumed vigilantes swinging from skyscrapers. Still, I do appreciate the comic-book theme here. |
#imfine Imfine would make a great character name, don't you think? It's perfect for science fiction and/or fantasy, with maybe a little comedy thrown in. You could have someone named Imfine Owyudoon, for example. "It's pronounced 'ihm-FEEN,'" they might insist, much as Young Frankenstein insisted on Fronk-in-steen. Or maybe it's French: "eem-feen-ay."

And it has, as my use of the third-person pronoun suggests, the advantage of not being obviously one gender. Sure, it might be close to Imogene, with the above pronunciation, but so what? You can use it for characters of any gender, which is helpful for keeping readers guessing or planning a plot twist.

Other possibilities:

Imfine Andyu

Imfine Thanxforaxin

Imfine Reilly

Imfine Eyeswere

Imfine No'Imnot

So I went ahead and did a search for the word, to see if someone else has already had my idea (as per 99.99% of the time), and, true to the shithole Google has become, the first result was a shopping site with that exact name. I won't promote unfettered consumerism or provide free advertising by linking it here; all I'll say is that it looks like the shop has an Instagram presence, which I'd imagine is somewhat tricky (you have to read the Exclaimer! dropnote above to understand why).

Then there's something called the I'm Fine Project, or imfineproject, "sculpting mental health awareness through art." Look, I'm not going to rag on that; if it helps, it helps (though I have no idea whether it does). But I will say that their description includes the line, "At workshops participants use clay to create a mask..." and that's how I know that, no matter what mental health problems I might have, it wouldn't be for me. Attempting to create "art," unless you count writing as an art, always ends in the same way for me: frustration, increased agitation, and throwing the failed attempt across the room or ripping it up into the trash. In other words, it would make everything worse.

Speaking of artists, according to the search there's also apparently a musical artist called imfine, also somehow with an Instagram presence, so as usual, someone beat me to the naming idea. |
Another one from Mental Floss today, because my random number generator likes to mess with me. But first, a quick announcement and stealth plug: Starting tomorrow, and running through March 20, Elisa, Stik of Clubs.

But for today, on to the article:

To be clear, sometimes—perhaps even often—experiments fail. If they didn't, they wouldn't be experiments. But sometimes, they fail so completely that it makes you wonder whether the cliché should be "curiosity killed the human."

I should initiate the Hughes Award for such experiments. "Mad" Mike Hughes was a flat-Earther who designed and built a steam-powered rocket to prove to himself that our planet is flat. I've written about him before. The rocket, somewhat predictably, exploded with him inside (and even if it hadn't, it wasn't designed to get high enough to rule out the reality that the Earth is roughly spherical). You might say, "Well, there's already the Darwin Awards to cover that sort of thing," but the Darwin Awards only consider a subset of Stupid Human Tricks.

Not all of the featured failures are quite that spectacular or, some might say, tragic. Out of the 14 in the article, I'll just highlight a few here.

1. Winthrop Kellogg's Ape Experiment

In the early 1930s, comparative psychologist Winthrop Kellogg and his wife welcomed a healthy baby boy they named Donald.

No, not that Donald. Or that one, either. Also, not that Kellogg.

The psychologist had grown interested in those stories of children who were raised feral—but he didn't send Donald to be raised by wolves. He did the opposite: He managed to get his hands on a similar-aged baby chimp named Gua and raised her alongside Donald.

On the surface, considering the state of knowledge in the 1930s (DNA hadn't been invented yet, for example, but it was known that humans and chimpanzees were closely related on an evolutionary scale), this was a perfectly reasonable experiment—provided one ignores the ethical considerations involved in, for starters, separating a baby chimp from her tribe. As the article notes, the experiment didn't quite pan out (that's a pun, and it's very much intentional).

2. The Stanford Prison Experiment

You may have heard about the Stanford Prison Experiment, a social psychology study gone awry in 1971. The point of the experiment, which was funded by the U.S. Office of Naval Research, was to measure the effect of role-playing and social expectations. Lead researcher Philip Zimbardo had predicted that situations and circumstances dictate how a person acts, not their personalities.

I can't help but note the similarities between this one and the chimp one, because both were about innate traits vs. environmental conditioning. Most people have heard of this one, but there's a lot of misinformation about it out there, and I suspect that it functions as a kind of test of a person's preconceived notions, much as the novel Lord of the Flies does. If you haven't heard about it, the article goes into more detail—but I wouldn't fully trust Mental Floss to get it right.

3. Franz Reichelt's Aviator Suit

Now, this one would be a retroactive candidate for the Hughes Award.

In the early 1900s, Reichelt crafted a parachute from 320 square feet of fabric, all of which folded up into a wearable aviator suit. He had conducted several parachute tests using dummies, which all failed. He pinned the blame on the buildings, saying that they simply weren't tall enough.
I think we can all see where this is going, but, again, our state of knowledge in the early 1900s was even more incomplete than it is now. Hell, the first powered flight was late 1903.

In 1912, Reichelt planned to test his latest version by flinging a dummy from the Eiffel Tower. But when he arrived at the famous landmark, the inventor surprised the waiting crowd by strapping on the parachute suit himself and taking the leap.

Hence, the Hughes Award.

The parachute didn't open, however, and Reichelt became a victim of his own invention. (An autopsy reportedly determined that he died of a heart attack on the way down.)

So, the next time someone tells you "It's not the fall that kills you; it's the sudden stop at the end," remember Monsieur Reichelt and how the fall actually did kill him. But mostly I'm including this one in my commentary to note that, from then on, the famous Paris landmark would be known as the I-Fell Tower.

5. William Perkin's Mauve-lous Mistake

He had unwittingly discovered a way to produce mauve. The color was a smash hit, especially after Queen Victoria donned it for her daughter's 1858 wedding.

I'm leaving this one here to illustrate that sometimes failures are actually successes in disguise. Not the parachute guy, obviously, though we learned from that, too.

7. The Cleveland Indians' 10-Cent Beer Night

In 1974, the Cleveland Indians tinkered with a new promotion to increase game attendance—giving fans the opportunity to purchase an unlimited amount of beer for 10 cents a cup, which wasn't the best idea.

Since the article won't do it, I will: 10 cents in 1974 is roughly equivalent to 65 cents in early 2025 (there's a quick sketch of that math at the end of this entry). Which is about 1/10th of what they sometimes charge for good beer at a drafthouse, but I have no idea what they charge for watered-down swill "beer" at a ball game. At any rate, I think it's pretty obvious why selling cheap beer in a stadium full of already hyped-up sports fans is a Bad Idea.

10. The New Ball

The basketball has been tweaked here and there over the years, but the modifications apparently went too far when the NBA experimented with a microfiber ball in 2006. "The New Ball," as it was commonly known, was cheaper to make and was supposed to have the feel of a broken-in basketball right from the start.

You know, I don't see why this even belongs on the list. Sure, it was an experiment of sorts. Sure, it failed. But no one died or even got seriously injured (the article notes cuts on some players' hands, but that's about it), and then they reverted to the old basketball.

Feeling deflated, the NBA officially announced they were pulling the ball from play on December 11, 2006—less than three months after its debut in a game.

So I'm mostly including this one to show that I'm not the only one who makes terrible puns.

12. Wilhelm Reich's Cloudbusters

Psychoanalyst Wilhelm Reich managed to draw a straight line from human orgasms to the weather to alien invasion. Influenced by Sigmund Freud's work on the human libido, Reich extended the concept to propose a kind of widespread energy he called orgone. To give you an idea of how scientifically sound Reich's concept was, orgone has been compared to the Force in Star Wars.

And this one was less a failure of experiment than it was a failure of theory and critical thinking.

14. New Coke

April 23, 1985, was a day that will live in marketing infamy.

And that's how Coke describes the failed experiment that was New Coke.
On that day, the Coca-Cola Company debuted a new version of their popular soft drink made from a new and supposedly improved formula.

Ah, yes: the Great Coke Crisis of 1985, the year I switched to Dr. Pepper (because Pepsi tastes like ass, and RC Cola isn't as widely available as the Big Two colas) and didn't switch back until around the turn of the century.

The message was received loud and clear. Coke announced the return of Old Coke in July, dubbing it Coca-Cola Classic—and they never experimented with the formula again.

Except that's not completely true. They switched from sugar to HFCS, which some say is the exact same thing and others disagree, but regardless, it's a change in formula. I also think Diet Coke benefited from whatever they learned doing New Coke. Most importantly, though, they did muck about with the formula after that, but this time they were smart enough to sell it separately instead of replacing the Real Thing. This eventually led to the development of Coke Zero, which is the greatest invention since the Skip Intro button.

Like I said, experiments often fail. If we're smart, we don't die of a heart attack in the process, and can actually learn and grow from the failures. Some of us even have the capacity to learn from the failures of others, though I think "jumping off the I-Fell tower using unproven and questionable gear" isn't something most of us have to be warned against.
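About that 10-cent beer arithmetic: here's a minimal sketch of the inflation adjustment, in Python. The CPI figures are approximations I'm assuming for illustration (roughly 49.3 for 1974 and 317 for early 2025, CPI-U); they're not from the article, so treat the result as ballpark.

```python
# Rough check on "10 cents in 1974 is roughly 65 cents in early 2025."
# The CPI-U values below are assumed approximations, not figures from the article.

CPI_1974 = 49.3    # assumed annual-average CPI-U for 1974
CPI_2025 = 317.0   # assumed CPI-U for early 2025

def adjust_for_inflation(amount: float, cpi_then: float, cpi_now: float) -> float:
    """Scale a dollar amount by the ratio of the two price indexes."""
    return amount * (cpi_now / cpi_then)

beer_1974 = 0.10   # ten cents a cup
print(f"${adjust_for_inflation(beer_1974, CPI_1974, CPI_2025):.2f}")  # ~$0.64
```

Swap in whatever index you prefer; the point is just the ratio. |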
This article from Mental Floss comes to us from 2020. I doubt there have been any further notable events in the subject's history since then. Despite the advanced age of the article, I only ran across it in the last week or so, during which time I completely forgot why I felt the article was important enough to feature here. Or maybe I saved it just to add further random chaos to the world, which is sometimes why I do things. During the Seven Years War of the mid-1700s, a French army pharmacist named Antoine-Augustin Parmentier was captured by Prussian soldiers. Ah, yes, back when there was Russia and P-Russia. As a prisoner of war, he was forced to live on rations of potatoes. Oh no. Clearly, this was before the Geneva Convention. In mid-18th century France, this would practically qualify as cruel and unusual punishment: potatoes were thought of as feed for livestock, and they were believed to cause leprosy in humans. This is coming from people whose national cuisine consists of ground-up pig asshole, snails, and frog legs. The fear was so widespread that the French passed a law against them in 1748. Pomme de Terre Prohibition! But as Parmentier discovered in prison, potatoes weren’t deadly. In fact, they were pretty tasty. See, this is what I don't get, though I could probably look it up from a better source: raw potatoes are disgusting. And why would they take the time and resources to cook the things for prison chow? The story of mashed potatoes takes 10,000 years and traverses the mountains of Peru and the Irish countryside; it features cameos from Thomas Jefferson and a food scientist who helped invent a ubiquitous snack food. Hm. Maybe it was the Jefferson reference that made me save the link. Potatoes aren’t native to Ireland—or anywhere in Europe, for that matter. Count on Mental Floss for helpful and vaguely racist information. These early potatoes were very different from the potatoes we know today. Yeah, for instance, they didn't come in a sleeve from McDonald's. They were also slightly poisonous. They're nightshades, like tomatoes, which Europeans also thought were poisonous. To combat this toxicity, wild relatives of the llama would lick clay before eating them. The toxins in the potatoes would stick to the clay particles, allowing the animals to consume them safely. People in the Andes noticed this and started dunking their potatoes in a mixture of clay and water—not the most appetizing gravy, perhaps, but an ingenious solution to their potato problem. Oh... no, it was this bit. Yeah. That seems awfully specific, and a brief search didn't turn up any corroboration. Truth, or legend? I know I've often wondered about poisonous foods that got eaten anyway because the humans around them figured out how to neutralize the poisons. Pretty sure I've mentioned some of them in here before. How did they figure it out? Some by watching animals, I'm sure. Others? No clue. But during times of hardship, when easier food sources may not be available, I can totally see humans figuring this stuff out, because we're clever and hungry. By the time Spanish explorers brought the first potatoes to Europe from South America in the 16th century, they had been bred into a fully edible plant. That sentence glosses over quite a bit of savagery on the Spanish side. So that's potatoes, and the article says quite a bit more about them. But it's supposed to be specifically about mashed potatoes. 
In her 18th-century recipe book The Art of Cookery, English author Hannah Glasse instructed readers to boil potatoes, peel them, put them into a saucepan, and mash them well with milk, butter, and a little salt.

Whether she innovated the mashing part or someone else figured it out first, that seems to be where the true history of the mashed potato begins.

In the United States, Mary Randolph published a recipe for mashed potatoes in her book, The Virginia Housewife, that called for half an ounce of butter and a tablespoon of milk for a pound of potatoes.

She was related by marriage to Jefferson. Was that the only connection?

But no country embraced the potato like Ireland.

And yet, they didn't invent vodka. I'm skipping over a bit, here.

In the 1950s, researchers at what is today called the Eastern Regional Research Center, a United States Department of Agriculture facility outside of Philadelphia, developed a new method for dehydrating potatoes that led to potato flakes that could be quickly rehydrated at home. Soon after, modern instant mashed potatoes were born.

This is going to send crowds after me with tiki torches and pitchforks, but I like instant mashed potatoes.

Well, there's more at the article, including another really oblique reference to Jefferson. And if you search, you can probably find more information on YouTube. |
I believe in coincidence. That is, when two or more seemingly unrelated events appear to converge in a manner meaningful in some way to me or other humans, it's not because someone somehow steered the results, but because of pure coincidence.

You know what would make me consider actually believing in the supernatural? If there were never any coincidences. Random chance will occasionally put two or more factors in close proximity, like when a cloud covers the Sun and Moon during a total solar eclipse. (The eclipse itself is a giant cosmic coincidence, what with the Moon and Sun appearing to be about the same size in the sky.) If no coincidences happened, well, that would require a Vast Cosmic Intelligence to avoid them. Coincidences are simply the occasional ordinary workings of random numbers and/or chaos.

But sometimes, I'll run across a coincidence that stretches all credulity, that is so utterly appropriate as to make me gape in speechless awe at the sheer metaphysical metaphor (or metaphorical metafizz) of it all. Well, today's coincidence is not quite on that level, but almost.

For complete background, first you have to know that my link queue is, as of this morning, 48 items long, and each of the 48 items has the same chance of being selected at random. A 1 in 48 chance, to be precise. So, roughly, today's article, from the BBC, being the only one in the queue on this subject, had only about a 2% probability of being selected today (there's a quick sketch of that draw a little further down). And today is the day when I (unless I fall over dead before I get to it) reach a nice round-number milestone: a 2000-day streak on Duolingo.

See the connection? It's only meaningful if one attaches significance to numbers with lots of zeros in them. But, let's be real here, most of us note such round numbers as special. But enough about that. Time to take a look at the actual article.

I'm standing in line at my local bakery in Paris, apologising to an incredibly confused shopkeeper. He's just asked how many pastries I would like, and completely inadvertently, I responded in Mandarin instead of French.

Ah, the age-old tradition of the humblebrag.

I'm equally baffled: I'm a dominant English speaker, and haven't used Mandarin properly in years.

Pretty sure that's one language I know I'll never learn.

Multilinguals commonly juggle the languages they know with ease. But sometimes, accidental slip-ups can occur.

I want to be clear, here: I don't consider myself multilingual. I can understand a good bit of written French. Je peux écrire des mots en français. I can't pronounce it well enough to be understood, and I can't follow spoken French well enough to understand most of it. In other words, I'm not fluent. So, any communication in French, I have to translate to English in my head, then compose a sentence in English and translate it into French. In doing so, I make mistakes, English slips in, and it becomes a kind of creole that even my New Orleans-born father would have cringed at. So I'm guessing that such mistakes become rarer as one gains fluency, but I don't know for sure.

Research into how multilingual people juggle more than one language in their minds is complex and sometimes counterintuitive.

I hope it's counterintuitive. That's one reason we do science: to rise above mere intuition and "common sense."
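And since my random number generator keeps coming up in these entries, here's a minimal sketch of the sort of draw I'm describing. The queue contents are placeholder names I made up; only the 48-item count and the roughly-2% figure come from what I said above.

```python
# A made-up link queue of 48 items and a uniform random draw from it.
import random

queue = [f"saved-link-{n}" for n in range(1, 49)]   # 48 items, as of that morning

# Each item has a 1-in-48 shot, which works out to about 2%.
print(f"{1 / len(queue):.1%}")         # 2.1%

todays_article = random.choice(queue)  # uniform pick; no cosmic steering involved
print(todays_article)
```

Run that enough mornings in a row and, eventually, the "coincidental" article comes up on the "coincidental" day. That's all a coincidence is.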
"From research we know that as a bilingual or multilingual, whenever you're speaking, both languages or all the languages that you know are activated," says Mathieu Declerck, a senior research fellow at the Vrije Universiteit in Brussels. I shouldn't make assumptions about people based on their names or where they're from, but I'll point out that the official languages of Belgium include Dutch, French, and German, and English is also widely used; when I was there, I saw signs and heard speech in all four languages (sometimes a sentence would switch easily between them), and more—though I'll be the first to admit that, on hearing them, I'm not sure I could reliably tell Dutch from German. My point is only that if anyone can hold a claim to knowing about multilingualism, it would be Belgium. Or Switzerland. But I'm not visiting Switzerland; it's where my ex-wife lives. Yes, I know it's a relatively large country; shut up. "For example, when you want to say 'dog' as a French-English bilingual, not just 'dog' is activated, but also its translation equivalent, so 'chien' is also activated." Those words also come from different sources. What I don't understand, and can't be arsed to look up now, is why Spanish, linguistically related to French, uses a completely different word, 'perro'. Declerck himself is no stranger to accidentally mixing up languages. The Belgian native's impressive language repertoire includes Dutch, English, German and French. And what did I just say? Yeah, sometimes assuming doesn't make an ass out of u and ming. "The first part was in German and I'd step on a Belgian train where the second part was in French," he says. "And then when you pass Brussels, they change the language to Dutch, which is my native language. So in that span of like three hours, every time the conductor came over, I had to switch languages. Which sounds impressive, but remember, I navigated the Belgian rail system while knowing maybe four Dutch words, and one of them is "clock." Okay, "klok." Train announcements are verbal, though. The article moves on to describe some experiments that study this code-switching and its associated errors, and it's very interesting to me, but not a lot of point in quoting from it. Just one thing from the middle of that section: "The brain is malleable and adaptable," says Kristina Kasparian, a writer, translator and consultant who studied neurolinguistics at McGill University in Montreal, Canada. "When you're immersed in a second language, it does impact the way you perceive and process your native language." What I need, after 2000 days on Duolingo (a streak, I'll reiterate, even longer than my current daily blogging streak), is to find a French speaker as a coach. I mean, I've needed to do that for some time. I couldn't do it in France, because, in general, French people have no patience for that merde. Navigating such interference could perhaps be part of what makes it hard for an adult to learn a new language, especially if they've grown up monolingual. And make no mistake: it is hard. Not impossible, as some claim; that I have had any success at all disproves that. But I'm sure I'd have learned French much faster in my youth, which was wasted learning Hebrew, Latin, and computer programming (all of which I've forgotten all but the very basics of) (pun intended). Some studies have shown bilinguals perform better on executive control tasks, for example in activities when participants have to focus on counterintuitive information. 
I have no idea if my language learning has helped with that.

Speaking multiple languages has also been linked to delayed onset of dementia symptoms.

I don't wish to die, but it would be preferable to dementia.

And of course, multilingualism brings many obvious benefits beyond the brain, not least the social benefit of being able to speak to many people.

These days, smartphones can assist with translation. I witnessed people using them in Europe. They're nowhere near perfect, but I'm sure they'll do in a pinch. Not only can you get spoken translations, but text translations as well. All great technology, but no substitute for learning, in my opinion.

And so, I continue to learn. |
Back in November of 2019, as part of a round of 30 Day Blogging Challenge, I answered the call of a prompt ("Besides music, what are some of your favorite sounds?") with a loud silence: "Hush" ![]() When I dig into the past on these weekly adventures, I try not to repeat myself. This one came up at random, today, and I got the feeling that I'd Revisited it before. But, searching around, I didn't find any evidence that I'd done that. Perhaps I'd simply come across this one in another search. When a blog spans 18 years (albeit with a long hiatus in there) and nearly three thousand entries, I suspect it would tax anyone's memory, and mine more than most. As for the entry itself, it's short and contains no external quotes. A while back, I vaguely recall, there was a 30DBC prompt that asked the old question: would you rather be deaf or blind? And I said something like, I despise 75% of all sounds, but the other 25% is music, and I wouldn't want to live without music. While true, I do prefer to be able to hear. If someone has the dedication to swing back and look at what I actually wrote, and finds that it's something different from that, and calls me on it, well, congratulations. This is a universal bit of sarcasm. I don't doubt that I've contradicted myself before, or remembered different details. I don't really mind the little sounds that accompany everyday existence: the hooting birds, the rustling leaves, that sort of thing, but I can't say they're my favorite sounds. Bird chirping, especially, can really get on my nerves. At first, I thought Silent Spring was aspirational. I've been known to reject potential romantic partners if they're the kind of people who leave the TV on all day for "background noise." Seems like this is less an issue now, with more people doing deliberate streaming. I also have no problem with (most) music being used as background noise. At this point, though, and even back when I wrote this, I was done with the whole "romantic partner" nonsense. Not to mention that a non-trivial reason why I never wanted kids is because children noises make me meshuggah. It wasn't the deciding factor (that was, well, look around), but it was definitely on my list. If I can't listen to music, I prefer silence, or as close to it as I can get. Still true. Not that I'd want to be deaf; not just because of music but because I like to have some advance warning that someone is trying to sneak up on me - less likely to have such warning if there were a lot of background noise. No one's successfully snuck up on me in over 20 years, so this must be working. So, between yesterday's prompt and today's, I suppose I've been outed as someone who prefers both silence and darkness. Hello darkness, my old friend. |
This one, from Big Think, is one of those articles that expresses what I've been thinking about for a long time, but haven't found the words for. How glorifying ignorance leads to science illiteracy ![]() If we wish to tackle the very real problems society faces, we require expert-level knowledge. Valuing it starts earlier than we realize. I know something like this has been said before, but if you want someone to fly the airplane that you're in, you find a trained, licensed pilot, not someone who's read a book on birds and thus thinks they know all about flight. All across the country, you can see how the seeds of it develop from a very young age. When children raise their hands in class because they know the answer, their classmates hurl the familiar insults of “nerd,” “geek,” “dork,” or “know-it-all” at them. And yet, people who know things like who lost the AFC championship in the 1989 American football season are valued. Everyone's a nerd about something. It’s a version of the social effect known as tall poppy syndrome: where if someone dares to stand out, intellectually in this case, the response of the masses is to attempt to cut them down. Human life, especially kid life, can be viewed as a tension between wanting to fit in and wanting to stand out. Someone who knows more, is more successful, or who seems to be smarter than you is often seen as a threat, and so in order to prevent them from standing out too much (or surpassing too many others), we glorify ignorance as the de facto normal position. I've said this before, too, but the truth is, ignorance is the default position. We all start out ignorant, and there's so much knowledge out there that we can't help but remain ignorant of all but a few things for our entire lives. But, in my view, what we should be glorifying is not the default position, but the desire to get just a little less ignorant. And I should make this clear: ignorance isn't the same thing as willful ignorance. Choosing to remain ignorant harms your development as a child, but leads to science illiteracy, which harms the entire world. For instance, my limited knowledge of English tells me that there should have been a "not only" in that sentence. That, or change the conjunction from "but" to "and." I get it. I make editing mistakes, too. I find them in previous blog entries, from time to time. There's probably some in here I didn't catch. But I always strive to do better. There are so many remarkable things that we — as a species — have figured out about existence. I cannot argue with this, but there's much left to learn. We know what life is: how to identify it, how it evolves, what the mechanisms and molecules are that underpin it, and how it came to survive and flourish here on Earth. I could get picky about that assertion, and it's a prime example of what I just said. For example, while we have some really good hypotheses about how life started, we haven't quite figured that out to a high degree of certainty. We know what reality is made of on a fundamental level, from the smallest subatomic particles to the nature of space and time that encompasses the entire Universe. I'm not sure this is entirely correct. But, again, we know more now than we did even 100 years ago. Our most valuable explorations of the world and Universe around us have been scientific ones: where we learn about reality by asking it the right questions about itself, and by listening to the answers that are revealed to us through experiment, observation, and careful measurement. 
And, yes, sometimes it turns out the previous answers were wrong or incomplete, and get replaced by new answers. This is still a better system than the old one, which declared that something was so and brooked no argument or counterexamples. Those, it turns out, are almost always wrong. It’s impossible, in this day and age, for any single individual or entity to be an expert in all possible things. I would go so far as to say that this has always been impossible, but now, we have a better understanding of just how impossible it is. Even as a child, you know when the adults are lying to you, to themselves, and to everyone else in the room. As Mike Brock just wrote recently, it’s the “capacity to think clearly about reality itself” that must be our most unbreakable trait as individuals, particularly when there’s pressure — peer pressure, social pressure, political pressure, etc. — to surrender that capacity over to whatever some arbitrary authority figure says. But here's at least part of the problem, as I see it: Since I don't know everything, cannot know everything, I have to rely on experts and authorities in whatever subject. If I'm going to court, that would be an attorney. If I'm curious about the function of interatomic bonds in a solid, it would be a physicist. If I want to know who lost the AFC championship in the 1989 American football season, that would be a sports nerd. If I want the plane to fly, it would be a pilot. So, some amount of trust is necessary. One way to know who to trust is by credentials, for which one also must trust the credentialing authority (if the pilot, for example, got their license from Bubba's Lurn 2 Flie in rural Nevada, no thanks, I'll walk). Despite whatever your initial intuition might have been about an issue, you must always — every time you acquire new, valid information — re-evaluate your expectations in light of the new evidence. This is only a possibility if you can admit, to yourself, “I may have been wrong, and learning this new information is essential in getting it right in the end.” The willfully ignorant don't take this approach. They're Right, always, and to admit that they weren't would mean others might think they're weak or wishy-washy. You see this all the time. I bet you can come up with at least one example right off the top of your head. (No, I don't mean me, though I do need to fight against this tendency like most people.) Changing one's mind in the face of new evidence is true strength. Changing one's mind without substantial new evidence, now, I can see how that could be perceived as being flighty. There’s a reason why admitting, “I was wrong” is so difficult for so many of us, and why it is rarely an innate talent for humans to have, rather than a skill we must acquire. All of the solutions that require learning, incorporating new information, changing our minds, or re-evaluating our prior positions in the face of new evidence have something in common: they take effort. Yes, well, at least it's not physical effort, so I can do these things and still admit I'm lazy. Glorified underachieving, proclaiming falsehoods as truths, and the derision of actual knowledge are banes on our society. The world is made objectively worse by every anti-science element present within it. This may seem like an assertion without evidence, hanging there in quote form as it is, but I think reading the actual article provides the logical framework to back it up. Or, you know. I could be wrong. 
The Doctor: Ignorance is… um, what’s the opposite of bliss? Clara: Carlisle? The Doctor: Yes! Yes, ignorance is Carlisle. (If you don't know why that's funny, you can look it up. Then it won't be funny anymore, but at least you'll know a bit more.) |