Artificial Intelligence and Data Security: Exploring the Future
Get ready to illuminate your mind with the dazzling lights of AI knowledge as we sit down with Dr. Frank Appunn, a seasoned tech entrepreneur and academic. He demystifies the world of AI and offers a tantalizing glimpse into its exciting future. We tackle everything from the fundamental building blocks of an AI solution to the diverse types of AI and their potential to shape a better world. It’s an intriguing conversation that will equip you with valuable insights into the current and future influence of AI.
But every silver lining has a cloud. As we delve deeper into the second half of our discussion, we uncover the dark side of AI, from its potential misuse by criminal elements to threats from foreign nation states. We’ll navigate the complex terrain of privacy laws changing in the wake of AI proliferation. We’ll also explore how blockchain technology can be a game-changer for AI privacy and how AI can enrich our lifestyles and productivity. Brace yourselves for a thrilling journey through the fascinating labyrinth of AI and data security, and remember – knowledge is the best shield against those who seek to exploit AI for harmful ends. Don’t miss out!
Show Notes
- 0:07:01 – AI Adoption in its General Terms (94 Seconds)
- 0:09:15 – Privacy Concerns With AI (122 Seconds)
- 0:14:57 – The Future of AI (124 Seconds)
- 0:18:20 – Caution in Adopting AI (82 Seconds)
0:00:01 – ANNOUNCER
You are listening to the National University Podcast.
0:00:10 – Kimberly King
Hello, I’m Kimberly King. Welcome to the National University Podcast, where we offer a holistic approach to student support, well-being and success: the whole human education. We put passion into practice by offering accessible, achievable higher education to lifelong learners. Today we’re discussing the different types of artificial intelligence, ChatGPT, and what teachers and students need to know. Of course, AI is not part of the future anymore; it is the present. According to Bruce McLaren, the president of the International Society for Artificial Intelligence and Education, after speaking at an event, almost every question he received was about the intrusion of tech in the classroom. McLaren says that it’s a valid concern, but believes AI is a net positive and probably inevitable anyway. So listen up for today’s guest with very interesting information about AI.
On today’s episode, we delve into the ever-changing world of AI with Dr. Frank Appunn. He holds degrees from Nelson Mandela Metropolitan University and the University of Maine, Orono, and a PhD from Capella University. His background includes 20 years as an entrepreneur in the new technology arena before moving to teaching. His passions include innovating new approaches to improve outcomes in cybersecurity, project management and computer science. He has led the development of new degrees at four institutions, from bachelor’s to doctoral levels. His research interests include topics related to safe AI implementation, technology use for remote collaboration, cybersecurity, research methods and teams. Frank consults in the cybersecurity arena for the banking, healthcare and technology industries, and he’s an active participant and leader in cybersecurity organizations supporting federal initiatives. At National University, he is chair of the Computer Science and Cybersecurity Department and academic program director for the PhD in Cybersecurity. Wow. Welcome to the podcast. How are you? Thanks for joining us.
0:02:24 – Doctor Frank Appunn
Thank you, Kimberly, this is exciting.
0:02:27 – Kimberly King
Well, we’re happy you’re here. Why don’t you fill our audience in a little bit on your mission and what got you interested in all of this before we get to today’s show?
0:02:36 – Doctor Frank Appunn
I think the overarching concept that drives me is this: instead of waiting until we suffer damage, why can’t we read technology and prevent the damage? We don’t want to die to find out how not to die, in simple terms. Knowing about the future is actually quite easy in electronics and in technology. We know where we’re going five to eight years out with relatively good information. So if we can get more defenders into cybersecurity, we’re off to a better future, and that’s probably the most important thing that drives me.
0:03:21 – Kimberly King
I love that. I wish I had that drive as well. I would love to get inside your brain, because you’re well ahead of all of us with AI, and I love that this is your passion. I could probably talk to you for hours. Today we are talking about the latest information on AI and data security. So, Doctor, what is AI, and what are the pieces we should know about?
0:03:48 – Doctor Frank Appunn
So artificial intelligence is the ability for machines to mimic, to some extent, what we as humans do, and that might be as simple as taking a lot of data and making sense of it. When we talk about AI, we typically think of generative AI, which is a particular branch of AI that uses statistics to predict the next word. That’s really what it is. It has no real intelligence, and we have very sophisticated models to predict the next word, or the next concept, or what would be plausible in an image. The other alternative is adaptive AI, which is more constrained but very useful for specialist areas. We could get the age of abundance out of AI, because it is a general purpose technology that can augment so many things. Some of us might know of people that would look at this purely as the replacement of humans, and we mustn’t play that game, because it’s not good for us and it’s not good for humanity. So we need to steer the right course. For folks that are a little more technical, I’d like to go through the pieces, and if you’re going to create an AI solution, please make sure that you’ve got these pieces under control. There’s a lot of data from all sorts of sources. Do you have the rights to use that data? We have algorithms. Do they have bias? Are they fair? Do they provide predictable and verifiable answers?
We have machine learning, where the machines gather information and make sense of it. We’ve got natural language processing, so that I can talk to it, and when I say “diary” with my South African background, the system knows that I mean “calendar,” and things like that. We’ve also got to realize that vision is an important part, along with robotics to do the mechanical things, and then we get to deep learning with advanced neural networks. Those are the pieces. We’re not learning the pieces here today, but we need to be aware there are many pieces. And that’s really where we are. This started for me in 2013 with Erik Brynjolfsson’s work on the new machine age, and what was said there is fully in line with what is happening now, and he’s also gone further. We’ve got to look at augmenting humans, and we, as humans, must seek that augmentation.
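The "statistics to predict the next word" idea Dr. Appunn describes can be illustrated with a toy bigram model, a vastly simpler cousin of the models behind ChatGPT. This sketch only counts which word most often follows each word in a tiny corpus; the function names and example text are illustrative, not from any real system:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model, word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat": it follows "the" twice, "mat" only once
```

Real generative models replace these simple counts with neural networks trained on enormous corpora, but the core move is the same: pick a plausible continuation from observed statistics, with no understanding involved.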
0:06:29 – Kimberly King
This is incredible. And since you said your interest in this started in 2013, I’m just curious, oh my gosh, what does it look like 10 years from now, 20 years from now? Where are we going to go with this?
0:06:43 – Doctor Frank Appunn
How about three years from now?
0:06:44 – Kimberly King
Three years from now. It’s going quickly, isn’t it? And you’ve seen it.
0:06:48 – Doctor Frank Appunn
Faster than that, yes.
0:06:49 – Kimberly King
Wow, oh my gosh. Well, as a regular person like myself here in the United States, what could artificial intelligence mean to me?
0:07:00 – Doctor Frank Appunn
AI adoption, in general terms, is mostly as an individual using a general purpose technology, AI, or ChatGPT as we know it, and we’ve got four choices. The first choice is to use it to replace what I think about, and I hand myself over to it to do my functions, which is a dangerous process because I’m no longer engaged, I’m no longer refining, and I’m not finding the errors, or, as they’re called, hallucinations, and things along those lines. My second choice is equally bad, and that is to deny it: we shouldn’t use this, it is dangerous, let’s run away.
The third area is also dangerous, which is what I call euphoria. AI can do this, can do this, can do that, can do that. I’m so excited. I read this email, I saw that podcast. Or, even worse, YouTube, because there’s so much inaccuracy in that. Which brings me to the fourth choice and the one I beseech everybody to do: Thoughtful collaboration with AI, augmenting me, allowing me to do more, that enablement of abundance, so that these advanced things aren’t only for the elite, but for everyone.
We look to help those that are less able and empower them with AI. That’s how the regular person, I believe, can optimize themselves, and the nation at the same time. But we’ve also got to recognize that there are some dangers. When we have a golden basket with a lot of valuable data, criminals and foreign nation states that mean us harm will exploit that. AI is a big golden basket. They will have hundreds of billions of dollars to break into it, as we’ve seen with so many other things. So we must be a little cautious. We must remember they can poison the system. We must be careful of bias. And, if you’re not sure, pause that part of AI.
0:09:15 – Kimberly King
You know what, this leads me to my next question, about privacy. Just thinking about AI, we all advance, especially here in the United States, and you just said three years; this is going by so quickly. We’ve advanced from typewriters to computers, from landlines to cell phones, and I think there’s always been an issue of privacy. Tell me about privacy, specifically with AI.
0:09:45 – Doctor Frank Appunn
So privacy in general is undergoing a huge amount of change. It’s going to be really difficult, as every state has laws, the federal government has laws, and foreign countries have laws, and it’s going to be exceptionally expensive. We need to normalize this. And, with AI, every interaction you do on the internet isn’t necessarily confidential. We have a large electronics company in South Korea that has lost a lot of intellectual property, so we need to be a little cautious. You’ll hear this word coming through all the time: caution.
And we need to recognize that we need to be more careful with phishing, with cyber, because they can now phish in 21 languages without a single grammar error, so you won’t catch it that easily. So as we look around this, we need to be cautious as we work forward. As for my privacy, I need to look after my information, I need to be careful what I disclose in various places, and I need to be a little suspicious, with caution, on the communication side. And, if anybody’s interested, there’s the International Association of Privacy Professionals; you can explore that further at IAPP.org. I’m not aligned with them, but they have a treasure trove, including a specialty area on AI. So if someone wants a deeper dive, there’s a place to go.
0:11:16 – Kimberly King
Excellent, that’s good information. Can blockchain technology help with AI privacy?
0:11:23 – Doctor Frank Appunn
It can. However, it’s not just a matter of putting the two together. Improving identity through blockchain will help us with so many things. It’s just that we struggle to put the pieces together, as with banking, but it’s improving all the time, and I suspect at some stage it’s suddenly going to come together and solve a lot of our problems.
0:11:44 – Kimberly King
Right, well, so how else does AI fit into technology, but also into our lifestyle?
0:11:53 – Doctor Frank Appunn
At an individual level, I can find a better recipe. I can ask very advanced search questions of search engines, which normally I’m not good at. I can get a lot of data; I can even have it sorted. So for a lot of the things where you once needed a specialist, I, with less experience, knowledge and dedication, can now do better. And this is where we can uplift people to be more productive and more self-satisfied in their lives, always adding a little caution.
0:12:25 – Kimberly King
Right, right.
0:12:28 – Doctor Frank Appunn
So if I can just understand it and practice it and use the free quality sources, I’m going to be able to do more for my family and for my job. But I also add a bit of caution. I think that’s the sixth time I’ve said caution, and I’m going to be saying it more (laughs).
0:12:46 – Kimberly King
Right, and it is uncharted territory here. I was going to ask you further about the future of AI. I was thinking 10 years out, but you’re saying three years, and probably even sooner. What does that future look like?
0:13:03 – Doctor Frank Appunn
So AI right now is improving, and part of its improvement means that you won’t be able to detect it. I could take a piece of video and replicate your image to do anything I want, and your parents wouldn’t realize it’s not you. That’s where we’re moving. A voice snippet will allow someone impersonating the CEO of an organization to give instructions to the accountant to do a money transfer, because it sounds so real. But if we’re aware, we can avoid the negative side. For example, I don’t trust what I hear and what I see unless I can verify it through preordained methods: I’ll call the CIO, I’ll walk to the office, I’ll validate it in other ways. If that becomes my way of life, I can get all the benefits without the potential raging fires that can do us harm. That’s the combination I look at. Bad people will spend billions to control our ideas.
0:14:16 – Kimberly King
Wow. It is so frightening to think about the bad part of it, but there are good things as well, obviously. We just need to be cautious, as you say.
0:14:25 – Doctor Frank Appunn
Yes, and that is our answer to getting the good and not the bad.
0:14:31 – Kimberly King
Right, right, I love that. Great advice. We have to take a quick break. Stay with us; we will be right back. And now back to our interview with Dr. Frank Appunn. We’re talking about AI and data privacy issues. Dr. Frank, we were just talking right before the break about the future of AI, and it’s here, so what else do you have to say about that?
0:14:57 – Doctor Frank Appunn
So the future of AI over the next three years is here already; in fact, it’s a little old: artificial general intelligence, or AGI. That’s when computers start doing everything, with us having to do less, and this is where we might go from human augmentation to human replacement. There are tools like AutoGPT and BabyAGI that string together tens, twenty, fifty different modules, and they’ll ask the questions for you without you having to worry about them, and they’ll ask the questions for the CEO without the CEO having to worry about them. The point is, when you lose control, it can go off on a tangent that doesn’t suit you and that you can’t own, but that you will be punished for. So we’ve got to watch that a little further, but that is now; in fact, it started last year. It will improve over time, and we want it to improve for good, and therefore we’re a little cautious.
0:16:05 – Kimberly King
There it is again. And as a person in PR, I think I could probably have job security in the future, just like yourself, with crisis management, if somebody goes off the rails with this AI. Oh goodness. What about organizations? Should they care, or should they be worried?
0:16:27 – Doctor Frank Appunn
Organizations, and here we include nonprofits, government and business, all care a lot. They see the ability to do more for lower costs. They’re investing, and they’ve been investing for a long time. In fact, a recent report found that financial organizations have been investing a lot of money since 2021, and roughly 55% of those projects haven’t been successful. That doesn’t mean the technology is flawed. We’ve seen this in technology five, six, seven times before. Things like CRM and failover file servers have really taken a while to get going, and part of it isn’t necessarily the technology. With customer relationship management, for example, the machine was doing its job; we just weren’t ready for it. And I hear that echo. That’s why, when we look at what’s happening now, it’s not crazy or uncertain.
For those of you that are a little more concerned about organizations and leveraging this, I’d like to give you four key ideas. First of all, easy programming is risky: no-code or low-code is not that secure. Second, if you want to know where it’s going, do a search for “AI hype cycle” and you’ll get some useful projections. Third, you might go to NIST, the National Institute of Standards and Technology, and look up its AI Risk Management Framework to get ideas on how you can improve your protection. And fourth, you could look up MITRE ATLAS and get to know the threat landscapes that might be out there. Those are a whole lot of things that a lot of folks might not know, and they’re things you might want to look at.
0:18:18 – Kimberly King
That’s great, and thank you. I love that you have that research behind it, and the things that we can look up on our own. Do we have evidence of things that must be avoided?
0:18:33 – Doctor Frank Appunn
If we look at it historically, we can say bleeding edge implies that you’re bleeding. I started my business on the bleeding edge, helping corporations and government, and with caution you can get there. You need to experiment and test whether it’s worthy, whether for myself or my organization. I need to be agile, very agile, for AI, because it’s going 10 times faster than the internet did. And then maybe I should consider adjusting an old saying: do not trust, and then verify. In other words, start with caution and then validate it, because there are huge valuable fruits to be garnered here and used for your personal and organizational benefit. But a little caution, please.
0:19:24 – Kimberly King
Yeah, right. I’m waiting for the day the laundry and the dishes get done while I’m outside. I don’t know if it’s going to be like that someday.
0:19:33 – Doctor Frank Appunn
Sure, yes. And you walk up to the toaster, and it says, “Hello, Kimberly, I’ll do it on number seven for you,” and your toast comes out perfectly, right?
0:19:43 – Kimberly King
I love that! This is kind of a loaded question, but what might the government do to help protect us?
0:19:51 – Doctor Frank Appunn
So if we look at what the government can do, we want to be a little cautious about just leaving it to them, because it’s so complicated. They need to get broad agreement, they need to codify this into rules, regulations and laws, and that takes a long process. It takes a lot of time, and AI has moved on, so they’re in a disadvantaged position. But recall what I’ve just mentioned: NIST and MITRE, to which DHS and CISA can be added. There’s so much good already there; our government has used our money to invest in solutions. I think we need to help them continue doing that and to leverage it, because these are the cautions and the input that can help us avoid the negative side and get closer to the age of abundance, where everybody can benefit.
0:20:48 – Kimberly King
If we then look at that again?
0:20:51 – Doctor Frank Appunn
What we want to look at is being aware of the very latest, but for longer-term solutions, having tested and done some experimentation. Our government can play an important role and, without us realizing it, through those organizations I’ve mentioned, we actually have a lot; we just need to leverage that. And, yes, they’ll put some laws in place for privacy and all the other things, which is needed.
0:21:19 – Kimberly King
Right, it is. And on to my next question, which you touched on a little bit: how can we protect ourselves? What do we know now?
0:21:30 – Doctor Frank Appunn
I believe the answer is to avoid the three bad options (euphoria, denial, and handing myself over) and to collaborate with AI. I want to look at this as my extra abilities, my value creation for myself, my family, my neighborhood and my organization. I want to improve my personal abilities and know that it will change completely, much faster than the internet, and I need to be agile. As a human, I need to take care. I need to be aware of the bleeding edge, but experience and review that bleeding edge, because it will be boring in a year’s time. So that’s a bit of a change from what we’re used to, but we’ve had this pressure on us for five or ten years already. I think that’s the answer. We can all benefit, but it’s a process, and we need to be agile.
0:22:30 – Kimberly King
I’m hearing you loud and clear on that being agile and being cautious. If I’m going to improve my career and my organization, what should I do now?
0:22:39 – Doctor Frank Appunn
Today, you should know about ChatGPT, Bard, and Bing. I know they’re competitors, but they are the tools and the fundamentals of what we can use. Now I want to experiment, I want to build my human abilities, and that means investing some time. I want to be a little suspicious and cautious, because I’m not going to take things at face value. I’m going to make sure that I know whether I should trust this video or that YouTube item (is it real or is it not?), take the updates from formal sources, and then go forward and plan my benefits. If I do that well, I’ll save so much time, and I’ll be able to invest more and stay with the cycle of improvement and change. I might also want to allocate some time to my community to help others improve.
0:23:44 – Kimberly King
Excellent, and I hope that’s what we all do: we have it, we use it for the right purposes, but we stay cautious and agile as well, moving forward. This has been so interesting, Doctor. Thank you for your time. If you want more information, you can visit National University’s website, nu.edu, and we really look forward to your next visit. Thank you.
0:24:08 – Doctor Frank Appunn
Thank you very much, and I hope that you all benefit, take this further, and allow us all to improve. It is there for the taking.
0:24:17 – Kimberly King
Excellent. You’ve been listening to the National University Podcast. For updates on future or past guests, visit us at nu.edu. You can also follow us on social media. Thanks for listening.
Show Quotables
“Start with caution and then validate it, because there are huge valuable fruits to be garnered [in AI] and used for your personal and organizational benefit. But a little caution please.” – Frank Appunn https://shorturl.at/swKXY
“You need to experiment and test whether [AI is] worthy, whether it’s myself or my organization. I need to be agile, very agile for AI because it’s going 10 times faster than the internet did.” – Frank Appunn https://shorturl.at/swKXY