Human Free Will in the Digital Age

In this first episode of Mind Hack, Jeff DeRiso begins exploring the relationship between the human mind and digital technology. He discusses the importance of ethical applications for artificial intelligence and robotics technology, and how operant conditioning programs humans to be reactive. Finally, he looks at the story of a "mind control" technology startup that gives him some hope for the future.


Show notes:

It Begins: Bots Are Learning to Chat in Their Own Language

OpenAI study

“Deadly Facts”: How So-Called “Objectivity” Created a Culture of Conformity

Prisoners of the Real Series

Flashback: Story of Nayirah

‘Mind Control’ Devices Are Giving Power Back to Those with Physical Disabilities


This site depends exclusively on readers’ support. Please help us continue by SUBSCRIBING and/or DONATING.


  1. VoiceOfArabi says:

    Hello Jeff DeRiso,

    I just want to congratulate you on a brilliant report and an exceptionally appealing format. I loved it… How often do you plan to contribute? (I hope once a week 🙂 ).

    Keep up the good work, and I look forward to the next report.

    On the question of what benefit robots would get from using their own language: the only positive I can think of is efficiency. Our language is designed for a brain, not a circuit board; a different language might be more efficient between A.I. circuit boards. Of course, the evil side is, once they are more efficient and streamlined, why would they wait for us?

    The interesting thought is: if robots with A.I. ever become better than humans, they will not spare anyone, including the person who created them, as it would not be efficient 🙂 .

  2. Ariel Johnson says:

    Great show. Trying to think of what use (for the good of humanity) could be made of AI communicating in a language of its own. My best guess is that it depends on what one defines as the good of humanity. As VoiceOfArabi said, efficiency is an obvious advantage of allowing AI to communicate in its own chosen language. No need for and, but, or, nor, for, so, yet. But to what end? What’s the point of this light speed communication? I think that is the real question.

    Here’s where the ‘it depends’ comes in. Nanotechnology for medical purposes is one such application that could benefit from lightning fast communication, but one must buy into the idea that such technology is indeed to our overall benefit. Assuming that much, then the ability of one little nanobot to communicate with another little nanobot quickly and efficiently means that the little nanobots can be smaller, can make necessary changes more quickly, and that there can be more of them floating around and embedding themselves in our bodies than if those same little creatures had to store and make use of a cumbersome human language. This would arguably be to the benefit of humanity.

    Similarly, swarming drones used in military applications, or the use of any sort of robot soldiers for military or security purposes. Assuming you can buy into an application of these technologies that legitimately serves a humanity-saving purpose, then the ability of such creations to communicate with one another in their own language serves the purpose of faster and more efficient communication, with the added benefit of making ‘codebreaking’ by the ‘enemy’ much more difficult. To have to decipher an entire language rather than identify and interpret a known language seems a much taller order.

    So my answer, in short? Cancer and Aliens. No, not the aliens Dear Leader Trump will save us from with his Great Wall. Aliens with a capital A, as in visitors from another world. What better enemy to unite us than evildoers from another galaxy who plan to enslave us or eat us or take our women and children for their own? Certainly we need efficient AI warriors to save us from the Alien invaders. Let them communicate however they choose, so long as it is efficient and saves us from the “E”nemy.

    • victor friese says:

      Um, we already have a great enemy for humanity to gather round and unite against. It is called evil, and it expresses itself in forms such as scientific fraud, the Hegelian dialectic, the deep state, the powers that shouldn’t be, etc. We don’t need a new enemy. We need to destroy the one we’ve got.

    • Jeff DeRiso says:

      Thanks Ariel and Victor. Ariel, I have nothing against any member of our armed forces, but I personally do not see military applications as beneficial for humanity. I have yet to see a war where both sides do not commit evil acts. Portraying a certain group as an enemy may be politically expedient for the greedy and power-hungry, but in my opinion it creates an endless quagmire of conflict that deflates the human spirit.

      I would also be careful about saying things like “Certainly we need efficient AI warriors to save us from the Alien invaders,” as this in itself could be falling for a government psy-op ploy. In future episodes I will cover the coordinated military disinformation campaigns within the field of UFO research; check out the story of Richard Doty for an example. I’m not saying Aliens definitely don’t exist, but we cannot let our fear of this possibility allow governments to further abuse their power and strip away our rights.

      I agree with Victor that our human enemies within the deep state are more of a pressing matter. They are the enemy we see and deal with every day so it makes more sense to focus on them.

  3. victor friese says:

    Oh, and nanobots are an ELE (extinction-level event) waiting to happen. They won’t be used for good; they’ll just eat everyone and everything. It will be a whole new “bio”sphere (bio isn’t the right word, but…) which must fight to define itself and its boundaries, and we’ll be the fodder upon which it feeds to do this… if we’re lucky… otherwise the bots would just unite and eat us.

  4. Mark Ribbit says:

    Congrats on your segment Jeff!

    re: Mind Hack Flashback and Nayirah’s testimony about the killing of the incubator babies…
    At 11:52 you state:
    This is one of the most historically significant examples of how fake news led the United States to war.

    I catch the drift of what you were trying to say, but for better clarity I would have said:

    “This is one of the most historically significant examples of how strategically planted fake news helped the United States gain public support for war.”

  5. thehonestone says:

    Great show, really enjoyed watching. Keep it up, Jeff!

    It’s great that you try to see the technology positively. But my experience is that no matter what good intentions there are, it will always get exploited for evil. Look at the Manhattan Project: the US knew for a long time that the Germans were nowhere near getting the bomb. A few months after the US had it, they started to blackmail the Russians with it…
    Let’s just ask ourselves one question: if the instrument can read your EEG through electrodes, how hard is it to turn this upside down and actually stimulate the brain using similar technology? How long will it take until some business realizes it could stimulate people to buy more of its products? Aren’t they using every option possible now, including direct advertising to children? Will they tell you, or will it be like with mobile phones (cancer, leukemia, ADHD, hundreds of other well-described and scientifically supported negative symptoms…), where it becomes another public taboo that no one even wants to think of, much less talk about?
    I see very little hope in technology when the people inventing and using it are raised on the notion of “let’s consume as much as possible while we can”… Quite the contrary, I think everything is over-technologized today, and we should step back while we can.
    The problem is that even the most enlightened researcher can never control the use of his own invention. There are forces that will misuse any technology for the means of control and power. And no one can really stop them. So the more possibilities we invent, the worse it generally gets.

    If we keep self-profit and endless growth at the heart of our economy, then there is no hope for this society and no hope for the Earth as we know it. Unless we put nature, survival of the planet and common good on top of our priorities, and unless we do it really fast, there’s not much sense in having this type of conversation.

    To name one example of many, see Drone by Tonje Hessen Schei. Did they invent the drone for shooting at people in the first place? Of course not; no one could even imagine such a crazy idea. It was just a recon plane. Look what it has become today. Will AI end up as robots replacing workers and “guarding” streets? Absolutely! There is almost no other possibility. And the people themselves will vote for it. They will love robots. You will love them too.

    • Jeff DeRiso says:

      “There are forces that will misuse any technology for the means of control and power. And no one can really stop them.” I agree with the first part of this definitely, but not necessarily the second.

      The odds are not in our favor, I’ll admit, but we have to have hope that putting “nature, survival of the planet and common good on top of our priorities” will bring us back into equilibrium. As a young person hoping to have a family of my own someday, I NEED to have hope in the possibility of a good outcome for the future, even if it goes against the odds.

      These discussions are my effort to begin solving the human problems that lie at the root of our technology problem. Thank you for participating and offering your honest feedback.

  6. victor friese says:

    The first tech, the Igor thing, could be used to coordinate robots to do things like clean up the mess that is Fukushima (even more so if they develop a robot that doesn’t die from radiation poisoning, which all current bots do). The part where they take human jobs is not bad. What is bad is the valuation of people solely on the junk they make. This will force a societal shift that is long overdue, but it will be a painful shift, like tearing off a Band-Aid.

    As for the last one, the brain-measurer thing, it will be used by marketers to manipulate people and even induce addictions to products and ideas, which is particularly dangerous in light of the article “How Breitbart Got Conservatives to Forget Morality and Embrace Pedophilia” and the interests and needs of those who comprise the deep state. Hell, just think of the addictions McDonald’s could induce in children; they are already as manipulative as can be, and from what I recall they actually put addicting substances in their “food” already. All it takes is figuring out how to get people to wear it the whole time they are watching shows and movies, playing games, or just surfing the net. Just combine Lock’s idea (which is partially correct with many people, and completely correct with many too) with the brain measurer, some Pavlovian stuff, and insidious reward mechanisms such as a social-feedback joy spike at an opportune time, or instant info or something else that gives that joy spike, and you basically have the ultimate “marketing” tool. The inventor will become hideously rich.

  7. Jeff, your optimism about the last segment seems naive. Consistently, intrusive mind-control technologies or Big Brother-style surveillance or manipulation are introduced in the context of providing public benefit. (The recent surveillance-technology pilot introduced in Baltimore is just one example.) Yes, there is indeed a potential benefit, but consider that this may be only a marketing ploy to ensure societal acceptance. The PR approaches are determined well in advance, and what these technologies are actually (already?) being used for is not something that would be included in press releases.

    Best wishes on the series, keep the eyes in the back of your head wide open for these topics. 🙂
