November 24, 2025
Good morning! My dear friend, colleague, and mentor, Gordon Tavis, died last week at the age of 100.

Happy Feast of Giving Thanks! Feasts celebrating the agricultural harvest or a successful hunt (or fishing trip) go back thousands of years. There is a sense of gratitude (and relief!) in knowing there might be enough food to last through the winter, or until the next harvest.

As you hike the grocery aisles these days, what is your biggest surprise? (My favourite coffee bean brand is UP 40%!)

Apoplexy! Just got our 2026 property tax bill = a 21.93% increase… whoa! (And no heartfelt apologies, thoughts, or prayers were sent with it.) That’s without any changes or improvements to our very modest property.
- I strung a few outdoor lights on the tree branches.
- Man alive, did that ever feel good to be a little bit (physically) productive!
- And now, as I work at my desk and dusk arrives, I’m buoyed by the beauty.
- Start with an admonition from the historian Thomas Hughes: “History is the Greatest Lie.”
- Then, explore as many of those lies as possible from 360 different degrees.
- The latest Ken Burns Channel #2 production does just that — though still not perfectly.
- It takes everything we’ve learned about the birth of the United States, especially its wars, and adds lots of different perspectives previously unknown or underappreciated.
- All of it is too complicated and extensive to report here, but if you didn’t watch it, take some time to do so.
- A good friend, a real estate expert, is fond of the aphorism, “they aren’t making any more land on water.”
- Actually, we could pick that apart while suggesting they are — given global warming and all the flooding, but… that would spoil the beauty of the aphorism.
- Let’s just accept the wit and the wisdom of the quip.
- I was facilitating a conversation among a dozen top business leaders/owners/CEOs for the benefit of a client when one of them said,
- wait for it… “… they aren’t making any more CEOs…”
- Which prompts the question: Then where are we getting the ones we have?
- His point, of course, which was a good one, is that leaders equipped with the skills, courage, vision, intelligence, and experience are very rare.
- Nature or nurture? Probably some of each, but…
- … if the nature isn’t there the nurture probably can’t overcome it.
- Analogy: This writer will never run a four-minute mile… no matter how much nurturing, coaching, or training.
- With my current physical limitations and maladies I struggle with a 20-minute mile.
- Don’t go fast in the wrong direction. (Helpful hint)
- Agree? Disagree?
- Leaders do not get the luxury of emotional convenience; they get the responsibility of emotional discipline.
- (Have you experienced this?)
- Some leaders fall apart under pressure; the best ones rise above it.
- Emotional mastery… is the ability to stay composed while everyone else gets rattled.
- But, here is the paradox: Leaders typically feel what others feel, but they can’t afford to react like others react.
- When the team is anxious, the leader must be composed.
- When the team is down, the leader must supply energy.
- When the team is discouraged, the leader must instill hope.
- When the team is emotional, the leader must be rational.
- Not because the leader is fake, but because s/he has emotional mastery. (Informed by Eades)
- Leaders do not get the luxury of emotional convenience; they get the responsibility of emotional discipline.
- Courtesy of MIT Sloan Management Review and author Rama Ramakrishnan (September 2025), I thought we could together learn a bit more about generative artificial intelligence…
- … this time from the bottom up — or from the inside out.
- This is perhaps somewhat analogous to learning how to build, maintain, and repair a jet engine.
- Just one bite at a time… no pun intended (get it?!)
- Disclaimer: We detest abbreviations and acronyms, but will use them here (LLM = Large Language Model; i.e., the models, trained on vast banks of text, that make this kind of artificial intelligence possible)
- And, GENAI = Generative Artificial Intelligence; e.g., ChatGPT, Claude, and offerings from Alibaba, IBM, Salesforce, etc.
- OK… here we go:
- Q: If the LLM repeatedly generates one token at a time based on the current conversation, why have I seen it use information from a prior conversation in the response?
- A: LLMs generate responses one token at a time, based upon the input they are given in that conversation.
- By default, they don’t use past conversations.
- However, as noted previously, some LLM applications have a memory feature that lets them store information from earlier chats…
- When you start a new chat, relevant pieces of this stored memory may be added to the prompt…
- This means the model is not actually recalling past chats in real time; instead, it is being fed reminders of that information as part of the input.
- That’s how it can appear to remember things from a week ago.
- The details of what is stored and when it is used vary by vendor, and the exact methods haven’t been disclosed.
- Many platforms allow users to view, edit, or turn off memory entirely…
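- For the curious, the mechanism described above can be sketched in a few lines of code. This is a hypothetical illustration only (the function and variable names are ours, not any vendor’s, and real implementations are far more sophisticated and undisclosed): the model itself stays stateless, and the “memory” is simply text pasted into the front of each new prompt.

```python
# Hypothetical sketch of an LLM "memory" feature.
# The model never recalls old chats in real time; stored notes are
# simply prepended to the input it sees for the current turn.
# All names here are illustrative assumptions, not any vendor's API.

stored_memory = [
    "User's name is Pat.",
    "User prefers metric units.",
]

def build_prompt(user_message, memory_notes):
    """Assemble the text the model actually receives for this turn:
    the stored notes, then the new message."""
    memory_block = "\n".join(f"- {note}" for note in memory_notes)
    return (
        "Known facts about the user (from stored memory):\n"
        f"{memory_block}\n\n"
        f"Current message: {user_message}"
    )

prompt = build_prompt("What's my name?", stored_memory)
print(prompt)
```

- Because the notes travel inside the prompt itself, the response can mention “Pat” even though the model has no access to last week’s conversation; turning memory off simply means the notes are never added.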
- I am sometimes asked, “How do you decide what to write in these Musings?”
- It is a 100% organic process; I very rarely have anything planned more than a few days in advance.
- I’m usually editing right up until 6:59 AM on a Monday morning — just before I click send. (Nothing is on a timer.)
- I listen, I watch, I read lots, I pay attention, I learn from clients, I’m curious, I’ll sometimes feel a vibe.
- Do I sometimes get the dreaded writer’s block? Yes, often — and in those cases I just have to power through to get something on the page.
- BTW, this column is a self-imposed penance; it has appeared in some form for 45 years — this most recent format for seven years.
- Have I ever used generative artificial intelligence to help me write? No.
- I will often research a topic if it is of interest, but when and if I decide to write about it, the words are mine — poor words at best, but they’re mine.

