AI

The AI Feast

Before we discovered fire to cook our food, we spent a significant amount of time chewing. Consider gorillas, who, according to a nature show I watched, chew for hours each day. Some mountain gorillas even spend half their day gnawing on their food. But introduce fire, and you have a barbecue. The food is prepared quickly, and our bodies don't have to expend nearly as much time and energy breaking it down for digestion.

This is how I view AI technologies like ChatGPT. They're revolutionizing how we consume and process information, aiming to foster knowledge. They encourage us to think about thinking, and in doing so, they can help us better understand ourselves. Indeed, before we can effectively communicate with others, we need to comprehend ourselves. By gaining a clearer sense of our own worth, we're more likely to treat others as though they hold similar value.

However, there's always the risk of veering off course, even with the best intentions. This happens easily when we mistake the model for reality. We've been gnawing on leaves, and suddenly, we're presented with an all-you-can-eat buffet. Considering the current state of global health—with many countries, if not the entire world, struggling with obesity and poor health—the implications of this new cognitive feast could be substantial. It has the potential to amplify both benevolent and malevolent powers.

In AI and the Future of Humanity | Yuval Noah Harari at the Frontiers Forum, Mr. Harari breaks down some of his concerns. He’s not worried about terminator robots; he’s worried about how easily people are persuaded to do things that aren’t in their best interest. He makes a compelling point about the transformation of algorithmic functions from capturing attention to cultivating intimacy. Ultimately, he appeals to us to appreciate the power of language and leaves me wondering how little we understand the degree to which language (a technology, and the very thing that builds all our models of the universe) can be hacked, and us along with it. Now go chew on that for a couple hours.

AI Alignment: First Principles

The Intersection of AI Alignment and Self Alignment: A Case for Physical Practices

I’m not going to beat around the bush; I’m just going to say it plainly. Achieving AI alignment is a goal that first requires self-alignment. We cannot expect to correct an external relationship until internal balance is maintained. Otherwise, we will quickly find ourselves adrift in our own delusions. So here’s my belief: teaching physical alignment through practices like martial arts (Tai Chi specifically) will help individuals mentally and emotionally prepare themselves while seeking AI alignment solutions.

Developing Self-Awareness and Self-Regulation

Physical alignment practices help individuals develop greater self-awareness and self-regulation. By practicing mindfulness and present-moment awareness, individuals can develop the ability to recognize and regulate their own biases, emotions, and thoughts. This can help them approach their complex work with greater objectivity and clarity.

Fostering Empathy and Compassion

Physical alignment practices can also help individuals develop greater empathy and compassion for others. This is a critical skill not only for effective AI alignment but also for simply being a kind person. Acknowledging our imbalance and our biases means being vulnerable. Being vulnerable doesn’t take courage; it builds courage. A deeper understanding of this fosters a deeper sense of connection and understanding with others, allowing us to take on and better appreciate the perspectives and values of different stakeholders. I’d say that’s important to the development of AI systems.

Building Discipline and Resilience

Physical alignment practices can help individuals develop discipline and resilience. These are valuable traits for cybersecurity teams and other professionals working in the tech industry where burnout seems to be a critical issue. By developing the ability to focus and persevere in the face of challenges and setbacks, individuals can better navigate the complexities and uncertainties of AI alignment and cybersecurity.

Reframing Power and Conflict through Tai Chi

Practicing Tai Chi specifically means learning to approach conflict differently. The use of power is redefined because what power is, and where it comes from, is transformed. There is no clenched fist, and there is no seeking of power. There is plenty of power all around us, and more importantly within us. The problem is that we have been told there is something wrong with us and that something must be added, when in fact the opposite is true. There is more to us than we can imagine. Power is not force but control, and knowing the minimum effort necessary is the best possible policy. Strength isn’t in the breaking but in the holding up, in learning to support ourselves and each other.

Conclusion: The Benefits of Physical Alignment Practices

Overall, by teaching physical alignment practices like martial arts to employees and cybersecurity teams, organizations can help develop the skills and perspectives necessary for effective AI alignment and cybersecurity. These practices can help individuals develop greater self-awareness, empathy, discipline, and resilience, which can ultimately contribute to more ethical and socially responsible AI systems. Additionally, promoting physical and mental wellness among employees can also contribute to a healthier and more productive workforce, which can benefit the organization in many ways.

I encourage you to consider incorporating physical alignment practices into your own life or workplace. The benefits are manifold, and the impact on AI alignment could be profound. Oh, and if you need someone who teaches Tai Chi and is into cybersecurity, I know a guy.

Dawn of the Bot Hunter

It’s raining and the morning sky is still dark, but the light is slowly shifting from ebony to blue. 

I’m thinking about Blade Runner as I listen to the rain. Harrison Ford narrates my near-future dystopian fantasy as a billion drops per second shower the world. I imagine each drop a malware-loaded bot, a digital armada with greater power than humanity has yet amassed but smaller than an atom, slamming against my firewall.

Good morning, it’s a great day to hunt bots.

The information security company WhiteOps is the genesis of this daydream. Claim to fame: authenticating trillions of online interactions. The service: determine if it’s a bot or not. 

That’s what reminds me of Blade Runner: the Voight-Kampff test from Ridley Scott’s cyberpunk masterpiece. A digital detective tasked with identifying bots imitating humans. Sounds like another way of saying non-human investigations. So spooky and suspenseful, I’m definitely going to need a trench coat.

Detecting and defending against bots isn’t the future. It’s now. These bots are the new tanks and the next-generation super-cyber bombers. Consider how devastating the German U-boats were to the battles in the Atlantic. Bots are cyber-dimensional submarines exploiting the trade routes of the internet. They are electric ideas driven by algorithms with ambitions. And one of their greatest powers is passing as human.
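One crude but illustrative “bot or not” signal is timing: automated clients tend to fire requests at metronomic intervals, while humans are bursty and irregular. Here’s a minimal sketch of that idea; the threshold and the sample data are invented for illustration, not anything a real detection service actually does:

```python
from statistics import mean, pstdev

def looks_like_bot(timestamps, cv_threshold=0.1):
    """Flag a client whose inter-request intervals are suspiciously regular.

    timestamps: sorted request times in seconds.
    cv_threshold: coefficient-of-variation cutoff (a made-up value).
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 5:
        return False  # not enough data to judge
    cv = pstdev(gaps) / mean(gaps)  # low variation => machine-like cadence
    return cv < cv_threshold

# A script hitting an endpoint every 2 seconds, like clockwork:
bot_times = [i * 2.0 for i in range(20)]
# A human browsing: irregular pauses between clicks.
human_times = [0, 1.2, 7.8, 8.1, 15.0, 31.4, 33.0, 60.2]

print(looks_like_bot(bot_times))    # → True
print(looks_like_bot(human_times))  # → False
```

Real bot defense layers dozens of signals (headers, mouse movement, browser fingerprints) on top of anything this simple, but the cadence heuristic captures the spirit of the hunt.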

WhiteOps has a position open: Threat Intelligence Investigator. That sounds slick enough to me. If there is an AI that loves me, then there will be a bright and shiny circuit-badge with this gig. Just once, I want to unfold my wallet, flashing my ID, and say, “I’m Investigator Twitchell, this is my partner, we’re looking for some bots that were spotted in the neighborhood.”

I sent in a resume and cover letter a few days ago. Not just because Threat Intelligence Investigator sounds badass, it does, but also because figuring out what is human online is essential.  

If you find my words dramatic, well then don’t read this report on fraud and definitely don’t read this article on the AI-containment problem. And most definitely don’t read this one about Facebook being a Doomsday Machine with 90 million bots lurking around trying to friend the planet to death.

I hope to hear back from WhiteOps, but if not, I’m still going to hunt bots! 

And once I find them, game on. Ding ding goes the boxing-ring bell; let the match begin. In this corner, hailing from 3-dimensional space, fighting for humanity, and weighing in at 170 pounds of bravado and hyperbole: Jay “The Bot Hunter” Twitchell.

Well, like my grandfather used to say, “If you’re going to fight robots, you need to go to robot fighting school.” So, before my certificate of completion as a Digital Detective (artistic license with title) arrived, I was already signed up for a 4-day SOC analysis course with Black Hills Information Security taught by John Strand. 

SOC is short for Security Operations Center. It’s where the cybersecurity team responds to possible intrusions into the network. Picture a cyber-war room. Kinda like a NASA launch control room, with a two-story wall covered in screens, flashing red and green lights, maps from missile command, and graphs and dashboards keeping the score of the living and the dead. In the heat of it, sweat flowing from every brow, a dozen people furiously typing on keyboards, faces aglow in the wash of screen light, whispering battle commands into their microphones. 

SOC Analyst Level 1...gets that team’s coffee. Everybody’s got to start somewhere. As a coffee-dog and bot spotter, you let the team know about a flashing alarm, and then Levels 2 and 3 deal with capture, containment, and neutralization. You survey the network like a bushman on the savannah, scanning for evidence of predators’ digital scat, dissecting packets, and looking for paw prints of persistent connections in silicon.
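Those paw prints of persistent connections can be sketched in a few lines of code: malware that phones home tends to show up as the same outbound destination recurring across many time buckets, where ordinary browsing comes and goes. A toy hunt over a fabricated connection log (the record format and the cutoff are my own invention, not any particular SOC tool):

```python
from collections import Counter

def find_persistent_destinations(conn_log, min_buckets=6):
    """Return destinations seen in an unusually high number of time buckets.

    conn_log: iterable of (timestamp_in_minutes, dest_ip) tuples.
    min_buckets: distinct 10-minute buckets before we flag (made-up threshold).
    """
    seen = set()
    for ts, dest in conn_log:
        seen.add((ts // 10, dest))  # one vote per destination per bucket
    votes = Counter(dest for _, dest in seen)
    return sorted(d for d, n in votes.items() if n >= min_buckets)

# Fabricated log: 10.0.0.5 checks in every 10 minutes (beacon-like),
# while normal browsing destinations appear only once or twice.
log = [(m, "10.0.0.5") for m in range(0, 70, 10)]
log += [(3, "93.184.216.34"), (12, "151.101.1.69"), (44, "93.184.216.34")]

print(find_persistent_destinations(log))  # → ['10.0.0.5']
```

That steady heartbeat in the logs is exactly the kind of sign a Level 1 analyst escalates for someone else to capture, contain, and neutralize.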

Information security is totally hunting the hunter, spy vs. spy. Just not with fast cars and jet packs, but with SQL injections and rootkits. And if you're going to hunt down the enemy, you have to learn how to read the threat landscape and appreciate the tactics. To hunt a fox you must become a fox, yes? You need to know the methods so you can spot the signs that you are being stalked.

John Strand is a great resource for honing cyber-safari skills. John is a former SANS Institute instructor (15 years) and runs BHIS, a cadre of devious cyber ruffians.

A quick summary of the 4-day course:

There is no one product or strategy that is foolproof. Anything, given time and persistence, can be bypassed. The trick is layering the network with enough security gambits that it costs too much time and/or sets off enough alarms that an attack can be prevented or quickly resolved. The idea is to create a layered web. A spider uses more than one string to catch a fly. 

Endpoint analysis and common command-line magic tricks, combined with a slew of open-source network monitoring tools, and Shazam, you can respond to an incident. Right?

Hmmm...not so fast. Even a good plan won’t help you if you aren’t used to responding to threats. There are a couple of fun quotes about this: “Everyone has a plan until they get punched in the face” and “No battle plan survives contact with the enemy.”

This is why you hire penetration-specialist teams like BHIS and run attack simulations. If you can’t afford that, then attack your own system and test the defenses. Sounds like martial arts to me. Seeing as I’ve paid professionals to beat me up most of my life, I totally get this principle. When you’re getting your ass kicked isn’t the time to discover you’re not ready for an ass-kicking. No one has time to think when they’re getting pummeled. It takes practice to learn to roll with the punches.

And if you're going to pay someone to cyber punch you, John and his team seem like the right kinda people. 

My takeaway from the 4 days: John is a passionate and generous instructor. The class was pay-what-you-can. So, the cost wasn’t an obstacle for the education. And I’ve rarely seen someone outside of a Pentecostal tent so evangelized about their work. It’s great to see that this field can keep a fire alive in the belly. Borders on inspiring.

My favorite quotes from the course were:

“You don’t get paid for the good days, you get paid for the bad ones.”  

and

“You don’t train until you get it right, you train until you can’t get it wrong!” 

To get your own dose of John, listen to this Darknet Diaries podcast where he shares stories about all kinds of penetration testing. One story involves his mother popping shell on a prison system. Below are the podcast and an article from Wired for the extra curious (it’s totally worth it).

Darknet Diaries - 67: The Big House

(Darknet Diaries is my favorite podcast)

How a Hacker's Mom Broke Into a Prison—and the Warden's Computer | WIRED

I signed up for another course in March: Active Defense & Cyber Deception. I also enrolled in BHIS’s Cyber Range where you can build your cyber skills and supposedly compete for a position on the BHIS team. I also bought a t-shirt. I know it’s not quite a trench coat, but it’s a good start for the newest bot hunter on the block. Watch out, robots. I’m coming for you.


Digital Humanism

Sam Harris and Virtual Reality pioneer Jaron Lanier

This podcast is from 2018, but don’t let that fool you. This is still important ground to consider. It provides a measure for what kind of changes have taken place since this conversation.

One of the biggest points here: the value of creative ideas. Ideas act as the building blocks for shared values. And culture emerges from shared values.

AI and the Great Filter

Lex Fridman & Max Tegmark discuss AI and the future of Humanity. I came across this podcast researching machine learning. What a treasure. These guys cover a lot of ground in three hours. Here are my favorite topics:

(08:15) – AI and physics
(21:32) – Can AI discover new laws of physics?
(30:22) – AI safety
(47:59) – Extinction of human species
(58:57) – How to fix fake news and misinformation
(1:59:39) – AI alignment
(2:05:42) – Consciousness
(2:29:53) – AI and creativity
(2:41:08) – Aliens

After you make it through the whole thing, please share with me what you think about the concept of the Great Filter.