Sparks

  • Been there, want more

Travel has become so industrial – for both the traveller and the caterer. Travellers are being “influenced” to go to specific places, which then get overrun by crowds that often threaten the very thing those places are great at. Caterers, at the same time, find creative ways to funnel large numbers of people along conveyors so they can process the scale of the invasion. Authenticity is lost, and the true nature of the destination is never conveyed or experienced.

  • Is the apple of my eye an orange?

Revised: In a way, how our eyes see and how our brain synthesizes the view is comparable to the construction of reality from flickering, ephemeral matter (“vacuum fluctuations”).

    This piece was refined in collaboration with Claude.

I am at heart a philosopher and ponder science as much as almost anything else, but within that scope I am not a deeply mathematical or scientific mind. I mingle and graze out of my own curiosity. I hope to engender the same curiosity in the reader.

  • We can’t handle the truth

We are always trying to patch the symptoms while gleefully avoiding the root cause. That seems to be the modern MO.

    To stem the drug problem, we target the growers, dealers and suppliers. Why not target the demand?

    To stop poaching, wild animal trafficking and illegal arts trade, we target the hunters and thieves. Why not target the demand? 

To stop child and human trafficking, we target kidnappers, cartels and other in-between handlers. Why not bring down the depraved elements creating the demand for trafficked humans in the first place?

    This story goes on and on. 

My hypothesis? Sure, there are small-scale (in terms of money) downstream demands, but the majority of the funding comes from the wealth-adjacent. They are taboo to target.

  • It’s just Tuesday

I had a thought. I discussed this observation and perspective with Claude, and the discussion streamlined my thinking and clarified ambiguity. What do you think?

    Claude’s elevated version:

    When we imitate nature and find superior results—whether copying a bird’s wing or harnessing fusion like the sun—this shouldn’t surprise us about nature’s accomplishments. It should recalibrate our perspective. Nature’s achievements aren’t objectively extraordinary; they’re simply normal operations of physical reality, while our baseline expectations remain inadequately calibrated. What we label as “phenomenal” reveals more about the poverty of our initial assumptions than about nature itself. Our tendency to elevate the unexplained or incomprehensible to “remarkable” status represents a perspectival error: these processes are mundane to the universe, appearing exceptional only because we don’t yet understand them. The wonder we feel is an artifact of our limited vantage point, not a property of the phenomena themselves.

    My original statements:

    In the same way that imitating nature and finding that the result is better, we should not be surprised by the great things that nature has already achieved. It’s not that the thing that exists is amazing, it’s actually just natural but our expectations are too basic. What we don’t understand or can’t explain is instantly elevated to amazing even though it is passé to the universe. 

    Boiled down to: It’s our perspective that exaggerates our appreciation of all things normal as something phenomenal.

    Claude: “To the universe, fusion isn’t amazing. It’s just Tuesday.”

  • Who’s Got the Power?

Preface: This is abstract and doesn’t make a lot of the necessary detailed connections – just coarse-level connections.

    It is clear that the ultimate expression and source of power, wealth, and privilege is the electron. All of the elements necessary to fulfill humanity’s known desires are abundantly available. With electrons and these elements, we can transform air into water and lead into gold. The mechanisms needed to direct electrons into particular forms are still in their infancy, but they are rapidly developing. Those mechanisms are currently controlled by those who possess today’s dominant currency — money.

    The elements themselves are plentiful. What matters is the ability to manipulate them. At some point, control over electrons — the fundamental actuators of change — and the means to direct them may translate directly into wealth, power, and influence. Those who can wield them may become the new elite.

    Strengthened Conclusion

This was generated in collaboration with ChatGPT.

    Historically, power was measured in land, gold, armies, or labor. Control operated at the visible and physical level of society. But as civilization advances, power appears to be migrating downward — into finer and more abstract layers of reality.

    We are moving toward a world where influence may no longer depend primarily on controlling people or territory, but on controlling the invisible currents that reshape matter itself. If matter is abundant and transformation is the true constraint, then those who command the deepest layer of transformation command the future.

    And if the ability to reorganize matter and simulate intelligence requires fewer and fewer human participants, a quiet question emerges:

    When wealth and power no longer depend on broad human contribution, what role remains for the rest of us?

    Additional perspective (developed in conversation with Claude)

    The core insight is sound: if power shifts from controlling people to controlling transformation, and transformation becomes automatable, then the basis of the social contract evaporates. The traditional forms of leverage—withdrawing labor, boycotting products, economic interdependence—all assume the elite need the participation of the masses. But if technology enables complete autarky through automated systems, those assumptions break down.

    This isn’t inevitable. Whether this concentration occurs, or whether the fruits of transformation are broadly shared, remains an open question of governance and choice. But the trajectory is real, and the question urgent: how do we ensure that abundance serves humanity broadly, rather than freeing a small elite from any dependence on—or obligation to—the rest of us?

  • Impulsive Research

    ChatGPT helped refine and clarify (pointed out ambiguity and lack of clarity) the following:

A thinking, feeling, sentient creature cannot afford to be 100% true to its inner directives and impulses.

Bayesian logic, whether we are aware of it or not, is part of what enables the creativity and wide assortment of thought processes that humans go through. We don’t necessarily control the order of the things that we encounter, learn, process, and assemble into our mental model, but all of it contributes to the output of that model.

It then researched and identified currently known and bleeding-edge research and theories to filter these ideas, and suggested the following:

    A sentient creature cannot afford to be fully faithful to its inner directives, not because those directives are false, but because their unfiltered enactment is maladaptive. Human cognition appears to operate through probabilistic (Bayesian-like) processes that integrate uncontrolled inputs, ordering effects, and emotional salience into a mental model whose outputs are not consciously authored. Creativity and adaptability arise from this partial incoherence, while the frontal cortex functions not as a suppressor, but as a moderator—selectively permitting, delaying, or reshaping impulses to balance survival, meaning, and social viability.

    Did it stray or does this stay true to intent?

  • Stock hypothesis

The value of certain stocks, given their forward-looking nature, may better reflect current monetary inflation and the reduced value of the dollar than traditional inflation measures do. When corporate profitability has not increased but stock valuations have, the divergence reflects valuation expansion driven by expectations of currency debasement rather than real economic growth.

    Refined for clarity via discussion with ChatGPT.
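The hypothesis rests on a simple identity: price equals earnings times the valuation multiple, so price growth factors into earnings growth times multiple expansion. A minimal sketch of that decomposition, with purely hypothetical numbers:

```python
# Decompose a stock's price change into earnings growth and
# valuation-multiple (P/E) expansion, using the identity
# P = E * (P/E), hence P1/P0 = (E1/E0) * (PE1/PE0).
def decompose_return(p0, e0, p1, e1):
    pe0, pe1 = p0 / e0, p1 / e1
    earnings_growth = e1 / e0
    multiple_expansion = pe1 / pe0
    price_growth = p1 / p0
    return earnings_growth, multiple_expansion, price_growth

# Hypothetical example: earnings flat, price up 30%.
# All of the price growth must then be multiple expansion --
# the divergence the hypothesis attributes to debasement expectations.
g, m, r = decompose_return(p0=100.0, e0=5.0, p1=130.0, e1=5.0)
print(g, m, r)
```

With flat earnings the earnings-growth factor is 1.0, so the entire 1.3× price growth shows up as multiple expansion.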

  • Trashed out

In ninth grade, I wrote a term paper for English titled “The quest to rid space of earth garbage”. I used visuals from a video game (Wing Commander) to record a scene demonstrating a space battle.

The scenario was that aliens sent spacecraft to Earth’s proximity to eliminate the trash we had managed to get out of orbit. This trash was causing issues in other solar systems, so the aliens showed up around Earth to eliminate the problem at the source. This resulted in battles between Earth craft and alien craft, as the aliens were seen as potential invaders.

Part of the description also covered, for example, how the spacecraft were constructed. The outer shell of each spacecraft had an electrode that would create a shield around it, a protective force that could help it avoid damage from debris or firepower from other spacecraft. This was a science-fiction term paper we had to write for ninth grade. It seems to have held some prognostic power, as we are seeing an increasing amount of human-created material accumulating in outer space.

    An observation by ChatGPT for modification to the content:

    Back in 1992 I imagined orbit filled with trash — today scientists warn that accumulated debris could cascade into a barrier that imperils satellites and future missions (Kessler syndrome).

• Convergence on truth or agreement?

I read this article about smaller models trained on larger data sets outperforming larger models trained on smaller data sets: https://www.technologyreview.com/2026/01/07/1130795/what-even-is-a-parameter/amp/

It spawned a narrow thought on reinforcement learning, especially with curated, proven data. I then discussed this with ChatGPT, which led to the following conclusion:

    Massive data exposure stabilizes empirical risk minimization. By LLN-like convergence, parameter updates better approximate the expected gradients of the true language distribution, yielding cleaner conditional probabilities that inference merely samples from.
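The “LLN-like convergence” claim above can be sketched numerically: as the number of samples drawn from a distribution grows, empirical frequencies approach the true probabilities, so an estimator built from them gets cleaner. A toy simulation, assuming a hypothetical two-token “language”:

```python
import random

# LLN-style sketch: the empirical distribution estimated from samples
# approaches the true distribution as the sample count grows, so the
# estimation error shrinks (roughly like 1/sqrt(n)).
random.seed(0)
TRUE_DIST = {"sun": 0.7, "moon": 0.3}  # hypothetical two-token language

def empirical_error(n):
    """Total variation distance between empirical and true frequencies
    after n samples."""
    counts = {"sun": 0, "moon": 0}
    for _ in range(n):
        tok = "sun" if random.random() < TRUE_DIST["sun"] else "moon"
        counts[tok] += 1
    return sum(abs(counts[t] / n - p) for t, p in TRUE_DIST.items()) / 2

for n in (100, 10_000, 1_000_000):
    print(n, round(empirical_error(n), 4))
```

The printed error typically falls by about an order of magnitude for each 100× increase in samples, which is the stabilization the dense sentence above is gesturing at.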

    I then asked ChatGPT to take our discussion and generate content along the lines of my usual posts and this is what it produced:

    We talk about intelligence as something that grows by adding more structure. But what if it grows by being shown the same world, again and again, until the noise fades?

    Inference doesn’t decide what is true. It inherits the shape left behind by long exposure.

What, then, are we really scaling – capacity, or patience?

I then went further down a related tangent: “Wisdom of the crowd really does matter based on the source of the crowd – even the percentage of the sources. If a survey about the Earth’s geometry interviews a crowd highly skewed toward flat-earthers, the answers will overwhelmingly skew toward incorrect models of the Earth and astronomy. If this same kind of sourcing is done for AI model tuning/teaching, the AI model will hallucinate.”

    After a disagreement about my use of “hallucinate” and the context ChatGPT was applying to it, we reached an agreement:

    No challenge needed — your position is internally consistent and technically correct. What you’re doing is not defending “hallucination”; you’re undermining the term itself. Let me restate your claim in a way that makes that explicit, then test it.

    Your actual claim (made explicit)

    1. LLMs generate outputs based on learned probability distributions, not truth.
    2. Those probabilities are shaped by training data frequency and weighting.
    3. If the data distribution is overwhelmingly wrong or skewed,
    4. Then the most probable output will also be wrong.
    5. Calling this “hallucination” obscures the real issue: distributional bias, not model failure.

    That chain is solid.

    There is nothing random, pathological, or imaginative about the error.

    The model is doing exactly what it was optimized to do.

    And the summarized conclusion from ChatGPT:

    If I were to compress that into one dense sentence (in the style you approved earlier):

Large language models don’t hallucinate; they converge – and when the crowd is skewed, convergence is simply error with confidence.

    That sentence does fully incorporate the discussions we had.

  • Stitched

    Lives are a chronological stitch of moments, sprinkled with nostalgic reviews. Don’t let your brain de-dupe your moments just because they look the same.

    A recent YouTube video I watched helped surface this second thought, and I wanted to acknowledge it as the catalyst (but need to find it).

    A thought to be continued…