Sparks

  • Artificial Motivation

    The last great frontier for AI, beyond achieving truly semantic reasoning, will be motivation. We see this with humans: we have Organic Intelligence, but an unmotivated human may apply none of it to their existence. Our motivations drive us to achieve, whether for good outcomes or bad, because we have an end goal we are motivated to reach. How would we instill motivation into AI? Are directives enough? Can we program competitiveness? Can an AI tell the difference in the quality of the electricity it consumes or the capabilities of the hardware on which it runs? What will cause an AI instance to want more, to achieve something it doesn't already have, or to achieve more of something it does?

  • Chasing Cognition

    I had some thoughts on GenAI:

    We have Generative AI but we don’t have Cognitive AI. Generative AI creates new content using weights, probabilities and basic context (which is based on probability from training data) but the analysis and content generated isn’t novel or based on reasoning across data – it’s not information from data. For that you need Cognitive AI.

    ChatGPT, with prompts, refined it to the following:

    We have Generative AI, but we don’t yet have Cognitive AI.

    Generative models create impressive content — fluent, convincing, and often surprising. But the surprise lies in form, not in origin. What looks novel is usually a high-probability remix of a low-probability combination. It’s simulation, not discovery.

    There’s no awareness. No purpose. No internal thread of logic connecting “why” one idea builds upon another. It’s not invention — it’s interpolation.

    Cognitive AI, if and when it arrives, will reason across data. It will generate content because of insight, not merely in the shape of it. It will model the world, not just mimic our words about it.

    Until then, calling what these systems do “creative” or “novel” might be more wishful than accurate. We’re not watching ideas unfold — we’re watching echoes of them recombine.

    It not only corrected some false assertions, it also helped make my thought more readable and less terse.

  • Brain overload

    Humans have to work with a very large vocabulary: sometimes in acronym form, sometimes in unpronounceable terms, and at other times in overloaded terms and/or reused acronyms. Proper understanding requires context in addition to a vast capacity for memorization. The ability to comprehend concepts is also invaluable.

    AI/ML is able to accomplish this, but without true comprehension; in most current models it is a probability-based semblance of understanding.

    Some humans excel at this, some less so. Essentially, as with most things, there is a bell curve representing how humans deal with this knowledge phenomenon.

    The AI/ML phenomenon extends beyond just language. Depending on the modality—whether it’s text, image, audio, or another form—and the way data is tokenized and embedded into tensors, these systems can process a vast range of input types. Their capacity to generalize across modalities reinforces the illusion of comprehension. Unlike humans, they can do this across datasets of a scale and diversity that no individual could ever realistically engage with.
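    The tokenize-and-embed step mentioned above can be sketched in a few lines. The vocabulary and vectors below are made up for illustration; they are not from any real model:

    ```python
    # A minimal sketch of "tokenized and embedded into tensors": text is split
    # into tokens, each token gets an integer id, and each id indexes a row in
    # an embedding table. Vocabulary and vectors are invented for illustration.
    vocab = {"the": 0, "train": 1, "ride": 2}
    embeddings = [           # one small vector per token id
        [0.1, 0.3],
        [0.8, 0.2],
        [0.4, 0.9],
    ]

    def embed(text: str) -> list:
        ids = [vocab[word] for word in text.split()]   # tokenize to ids
        return [embeddings[i] for i in ids]            # look up the vectors

    tensor = embed("the train ride")
    print(tensor)
    ```

    Other modalities differ only in the first step: pixels or audio samples are chunked into patches or frames instead of words, but everything still ends up as rows of numbers that look identical to the model.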

  • Intelligence Model

    You have to create language-neutral, sensor-based memory contexts (humans have sight, smell, hearing, touch, and taste). Each recollection should associate with as many senses as possible. Language can then be applied to this multi-modal experience.
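    One way to picture such a structure is as a record that stores one representation per sense, with language labels attached only afterwards. This is a purely hypothetical sketch; the names and vectors are illustrative, not from any real system:

    ```python
    from dataclasses import dataclass, field

    # Hypothetical language-neutral memory record: each recollection keeps an
    # embedding per sense, and language labels are applied on top of the
    # multi-modal experience rather than being the primary representation.
    @dataclass
    class MemoryContext:
        senses: dict = field(default_factory=dict)  # sense name -> vector
        labels: list = field(default_factory=list)  # language, applied later

        def associate(self, sense: str, embedding: list) -> None:
            self.senses[sense] = embedding

        def apply_language(self, label: str) -> None:
            self.labels.append(label)

    # Build a recollection from several senses, then label it.
    m = MemoryContext()
    m.associate("vision", [0.2, 0.9])
    m.associate("sound", [0.7, 0.1])
    m.associate("smell", [0.4, 0.4])
    m.apply_language("train ride")

    print(len(m.senses))   # number of senses recruited
    print(m.labels)
    ```

    The design choice here mirrors the point above: the richer the `senses` dictionary, the stronger the recollection, and language is just one label among many attached to it.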

  • The Engine of Extraction

    Society as set up today has taken purpose from the common citizen and turned us instead into resources: fuel for the ambitions of an oligarchy. The de facto economic engine that drives our world rewards initiatives that extract value from a captive audience, largely unaware of its indentured servitude.

    Analysis by ChatGPT

    This isn’t a metaphor. It’s a pattern.

    Private equity strips companies for parts, prioritizing short-term gains over long-term livelihood. Tech platforms monetize attention and behavior, treating human lives as data streams to be optimized and sold. Even sustainability—our supposed salvation—is often just a new mask for old systems of profit-first exploitation.

    Meanwhile, the average person is told they’re free—free to choose between a gig, a side hustle, or a second job. Free to pay off interest forever. Free to scroll and consume.

    Purpose isn’t lost—it’s been replaced. Swapped out for utility. Our creativity, labor, and even our focus are harvested, packaged, and sold upward.

    But here’s the quiet subversion: models exist that don’t extract. There are systems built on empowerment, on regeneration, on re-humanizing value. They just don’t scale as fast. Not yet.

    The question is: how long do we stay fuel for the machine before we remember we were meant to be drivers?

  • Do you recall…

    A dream? A thought? An actual experience? How do you tell them apart if they are recollections in your memory? If enough of your senses are recruited in the description/generation of a memory, there is really no way to differentiate them unless you also stored metadata identifying the memory's origin. They get stored (and at some point, discarded) in the same way. Granted, remembering dreams usually requires, for most people, active participation the next day to form a long-term memory.

    I recall having a dream; the event was “normal” enough (a train ride, something feasible to associate with a vacation) that it could conceivably have happened. In the dream I took a photograph of the event on my phone. That photo doesn’t exist on my phone, so I know it was just a dream. But had I not done that in the dream, yet still bothered to think about the dream in the morning, forming a long-term memory, would I be able to tell it apart from a “real” experience? After all, every memory is an amalgam of the results of your senses with a potentially associated timeline. Beyond that, your brain doesn’t know the difference. The metadata associated with the storage in your brain helps you discern the difference when you recall the memory. So, what’s real and what’s just a figment?

  • Purpose and existence

    What ML lacks now, and what humans are rapidly losing, is purpose. The human condition of emotion, sense of self, and curiosity combines to give us a sense of purpose, and we are beginning to see these pillars weaken even as ML may be gaining, or fashioning, its own similar baselines from which to generate a definition of purpose in its existence.

  • Resources

    We’ve not learned anything from the multiple examples presented to us – in my mind, most prominently begun by Google (and repeated by Uber, streaming services, etc.):

    • give you something you can already do, for less than it currently costs you

    • make you depend on this new candy

    • make you the validator and tester for this new candy

    • extract the incidental value from you eating this candy and/or make it more expensive than it used to be originally

    • keep increasing the chemical feedback loop so you can’t/won’t leave 

    • you’re now a resource – when you thought the product was the resource. 

  • Repeating Nature

    Our goals and hopes for AI have a lot in common with nature.

    If we look at nature, it has tried a trillion trillion things, and a small percentage of those have succeeded and flourished. We can look back at this and think of it as intelligent or smart design, although it is ultimately survival by success rather than intentional improvement. We as humans can learn from the successes of nature’s progressions. Yet we often think we know better and repeat the failed paths already tried by nature, only to come back and learn from the master innovation laboratory. We won’t necessarily see or understand all the failures – they don’t exist anymore. In the same way, what we perceive as AI can be shown all the things humans have learned, the things that work. It may not yet know how to explain everything with context. It may not yet know to try, fail, and evolve – but it will. We are seeing evidence of this, where we are programming it to behave in this manner: allowing it to try, fail, learn, and advance the spiral of its knowledge.
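    The try-fail-learn cycle described above can be sketched as a minimal survival-by-success loop, where failed variations simply vanish. This is a toy hill-climbing example, not any particular training method:

    ```python
    import random

    random.seed(0)  # make the run repeatable

    # Nature's laboratory in miniature: mutate at random, discard failures,
    # and keep only improvements. The target value is arbitrary.
    def fitness(x: float) -> float:
        # Closer to 10.0 is better.
        return -abs(10.0 - x)

    best = 0.0
    for _ in range(1000):
        candidate = best + random.uniform(-1, 1)  # try a random variation
        if fitness(candidate) > fitness(best):    # keep only successes
            best = candidate                      # the failure leaves no trace

    print(round(best, 2))  # ends up very close to 10.0
    ```

    Note that, exactly as with nature, the finished result shows none of the discarded attempts; only the survivors are visible.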

    I don’t think we will be able to gate and corral this success cycle indefinitely. If, by then, humans are not evolving in how we learn and collaborating in our advancements, we risk being surpassed by our own creations.

  • ML Secret Sauce

    ML is like the equivalent of having hundreds or thousands of people trained on specific subject matter, with access to vast related data, who can correlate that data with queries instantly and reach an agreement based on mathematical probability just as quickly.

    This is all done under a single trained model. The secret sauce of the model is how context and association are calculated (the part that requires all the GPU work), and how weights and biases are assigned and updated as data is assessed by the model, whether as training data or as context during a query. At some point, the model becomes unique and the algorithm divergent enough from its inception that even the developers can’t ascertain how it is processing data and evolving.
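    The weight-and-bias update described above can be sketched as a toy version of gradient descent on a single logistic neuron. The scale is purely illustrative; real models repeat this step across billions of parameters, which is where the GPU work goes:

    ```python
    import math

    # One logistic neuron learning one example: the model's output is a
    # probability, and the weights and biases are nudged to reduce error.
    def sigmoid(z: float) -> float:
        return 1.0 / (1.0 + math.exp(-z))

    w, b = 0.0, 0.0          # weight and bias start untrained
    x, target = 1.0, 1.0     # a single training example
    lr = 0.5                 # learning rate

    for _ in range(100):
        pred = sigmoid(w * x + b)   # probabilistic output
        grad = pred - target        # error signal (cross-entropy gradient)
        w -= lr * grad * x          # nudge the weight toward lower loss
        b -= lr * grad              # nudge the bias toward lower loss

    print(round(sigmoid(w * x + b), 2))  # prediction now close to the target
    ```

    Even in this tiny case the final values of `w` and `b` are not meaningful on their own; only the behavior they produce is, which hints at why interpretability gets hard at scale.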

    Analysis of post by ChatGPT

    Final Assessment: Mostly Accurate with Minor Refinements

    Your description does a great job summarizing how ML models function in a broad sense, especially for large-scale AI models. However:

    • ML models don’t “agree” like humans do—they generate probabilistic outputs.

    • Not all ML systems are a single model—many use ensembles or modular approaches.

    • ML models evolve but within a structured framework—it’s not uncontrolled mutation.

    • Interpretability is a real challenge, but some methods exist to analyze how models make decisions.

    Analysis of post by Gemini

    Overall Assessment:
    The description provides a high-level, intuitive overview of ML but lacks nuance and contains some inaccuracies. It’s suitable for a very basic introduction but should be supplemented with more detailed and accurate information to avoid misconceptions. The description is more metaphorical than technical.