Welcome to our 6th installment of The Deliberate Way. We have the world-renowned bioethicist and NYU professor Matthew Liao with us today to talk all about ethical dilemmas in artificial intelligence and emerging technologies!
0:00 – 4:11 Introduction
4:12 – 5:46 What is a Bioethicist?
5:47 – 7:15 Some Big Problems Bioethicists Deal With
7:16 – 9:00 Getting Involved as a Bioethicist
9:01 – 11:51 AI Synopsis
11:52 – 13:50 Supervised vs. Unsupervised
13:51 – 18:25 AI Prediction Errors
18:26 – 25:59 Ethics and Safety of Technology
26:00 – 30:06 AI Sexual Harassment Scandal – How to Handle It
30:07 – 40:42 Audience Questions
40:43 – 57:04 “What Would You Do” (Speed Round)
57:05 – 1:00:01 Concluding Thoughts
Matthew is a philosopher, bioethicist and author who is internationally known for his work on a variety of fascinating areas that affect our daily lives, including: novel reproductive technologies, neuroethics, and the ethics of artificial intelligence.
During the day, Matthew is the Arthur Zitrin Chair of Bioethics and is the Director of the Center for Bioethics and Affiliated Professor in the Department of Philosophy at New York University. Matthew is also the Editor-in-Chief of the Journal of Moral Philosophy and, in 2019, he was appointed as a Fellow at The Hastings Center, a prestigious bioethics research institute. Matthew’s work has been discussed in, among other places, The Guardian, the BBC, The New York Times, The Atlantic, and Scientific American.
Matthew has more publications than I can count. He’s the author of the book, The Right to Be Loved, and he recently published The Ethics of Artificial Intelligence.
AI Definition & Blackbox Problem
Tumors & AI: Can We Trust Artificial Intelligence in Healthcare?
Human Centered Approach to Using AI
Bioethicist Defining AI: What is Artificial Intelligence?
An Ethical Framework in AI Strategy: OpenAI vs Microsoft
AI Innovation vs Regulation
Memory Erasure: The Technology Might Already Exist
PART 1: Mind Reading in Advertising? 👀 It's Currently Happening!
PART 2: EEGs used for Advertising & Pin codes
PART 3: Ingenious or Dystopian? Companies Using EEGs
Welcome to the deliberate way. I’m Dan Seewald and in today’s episode...
0:00
all right well welcome everyone I’m Dan Seewald your host you probably know me
0:07
and welcome to all you deliberate innovators out there today you know it’s a lovely spring day and you know I’ve
0:15
got AI on the mind and you know you might too because you’ve been hearing
0:21
probably a lot about it over the past several months maybe your kids wrote a
0:26
term paper using AI Shame Shame if they did um maybe you’re using it doing your
0:32
copywriting or writing an email or maybe you just were curious and you were
0:38
checking some things out over the past couple weeks we are going to go into the
0:43
deep end talking about AI but not just about AI we’re going to talk about the ethical quandaries that are elicited by
0:51
artificial intelligence and I could think of nobody better to be able to discuss this with than my guest here
0:58
today, S. Matthew Liao, or just Matthew. Matthew, so first of all thank you for
1:04
being here it’s a real treat to have you here and uh quite frankly I mean I can’t think of any better topic to talk about
1:10
at this stage but thanks for being here yeah thank you for having me Dan uh it’s a great pleasure
1:16
well you know Matthew I I gotta do a quick service for you because maybe not
1:22
everybody here knows you I mean I know you Matthew you’re probably one of the most renowned bioethicists that I’ve
1:29
ever met but for those who don’t know Matthew I’m going to give you a quick backdrop about who he is Matthew is not
1:36
only a philosopher not only a bioethicist but he’s also an author he’s internationally known for the variety of
1:43
fascinating work that he’s done that affects our daily lives from reproductive Technologies neuroethics
1:50
and of course AI um during the day Matthew is the Arthur
1:56
Zitrin Chair of Bioethics and the Director of the Center for Bioethics and Affiliated Professor in the Department
2:03
of philosophy at NYU New York University and Matthew’s also the editor-in-chief
2:09
of the Journal of Moral Philosophy and in 2019 he was appointed as a fellow at The Hastings Center which if you
2:16
don’t know, is one of the most prestigious bioethics research institutes so not too shabby and uh you know
2:23
Matthew’s been has had his work discussed in many different places that you might have seen the guardian the BBC
2:29
the New York Times the Atlantic Tucker Carlson but we won’t talk about that
2:35
um and Matthews also had more Publications than I can count which is pretty high
2:41
um he’s got a couple of books the right to be loved he’s cooperated on a bunch of other books and he recently published
2:46
the ethics of artificial intelligence I mean Matthew I could keep going on but
2:52
you know what we got things to cover today so um before I jump with my first
2:58
question for you Matthew a couple of housekeeping notes for folks first off we’re going to start out and I’m going
3:04
to ask Matthew a bunch of questions about AI about bioethics and so on we’re
3:09
going to take a pause so you guys have your say ask your questions in the
3:14
LinkedIn chat feature which you see there so have at it and use that start putting those questions in right now if
3:21
you like we will stop we’ll moderate those questions so Matthew has a chance to directly answer them if we don’t get
3:27
to all of them don’t worry we’ll uh we’ll have a look afterwards and share some thoughts answers responses to you
3:33
as well now one more thing that I’ll just note is that we will have a second part after
3:39
the Q a which is called what would you do all copyrights aside we’re stealing a
3:45
little bit of the the program from ABC where we’re going to ask Matthew what would he do as a bioethicist with some
3:52
interesting ethical dilemmas that you might see coming to an office or a
3:58
household near you so Matthew you ready for that I haven’t told you anything yet but what how you feeling about that uh
4:06
feeling great all right good well we have no zingers but some interesting ones for you
What is a Bioethicist?
4:12
um okay well Matthew let me just start off by asking this question when I first told a couple of friends
4:18
and family members that I’m going to be interviewing one of the most renowned bioethicists they said to me well that’s
4:24
cool but uh what’s a bioethicist and I’ve said to them well uh you know I’ll
4:30
describe it to you but why don’t you wait and I’m gonna ask Matthew to tell you what is a bioethicist and uh so
4:36
maybe you could give me a little bit about what that means and what’s your day-to-day like sure uh so bioethics is
4:43
the study of, uh, ethical issues arising out of the life sciences
4:50
um and generally sort of uh technology sort of Health Care Technologies biomedical research and so on and so
4:57
forth uh my day-to-day I’m you know usually when I get up I try to scan the
5:03
news I try to see sort of what’s coming online what are some of the hot hot button issues of the day and then I go
5:11
on then I have a bunch of different projects uh as Dan mentioned right now
5:16
I’m trying to write a book on neuroethics, and neuroethics is sort of, uh — there
5:23
are a bunch of Novel uh emerging Technologies on the brain and
5:29
um and I’m sort of right now writing a book on that uh and then I’ll also prepare my teaching so I’m you know also
5:37
teaching a course on neuroethics right now A grad seminar I taught it last night and
5:42
um uh and so that’s roughly what my day’s like yeah I I gotta ask you’ll go
Some Big Problems Bioethicists Deal With
5:48
in a little bit deeper you mentioned there’s some hot button issues I I feel like there’s just I I wish we had like
5:54
five hours to do this because there’s so many topics I want to talk to you about but um give me a a quick gallery view
6:00
what are some of the things that come up often that people are tearing their hair out about
6:06
sure so I mean you know I I you know you already mentioned one of them is going
6:12
to be AI sort of artificial intelligence uh I recently uh published an edited
6:17
collection on the ethics of AI and that topic has just really exploded so you know if you look at
6:24
ChatGPT everybody’s using it uh and as Dan mentioned you know
6:31
um there are all sorts of implications so at my school for example were worried about whether students are going to use
6:38
this on exams and plagiarism uh I’ve seen uh people submitting essays uh and
6:45
you know where they use chat GPT uh you know to my journal and um so you know
6:52
these issues are already cropping up and the technology is only a couple months old uh and so uh there are even people
7:00
who are calling for moratorium on sort of these uh these Technologies so that’s
7:06
definitely one hot button topic and yeah I can go on and list a bunch of other
7:11
ones but uh we’ll drill we’re gonna drill into some of these for sure and I
Getting Involved as a Bioethicist
7:16
I so let me go just a little deeper just about um kind of Bio ethics and and some of
7:22
the challenges so when you think about like how you got involved as a bioethicist like how does one become a
7:28
bioethicist and what kind of triggered you to to get involved I’m curious personally how do you end up in this
7:35
station in life yeah uh great so bioethics you can actually do it in a bunch of different
7:41
ways uh the way I got into it was through uh philosophy uh so I was I was
7:49
doing a PhD in philosophy in England at Oxford uh and I was doing something called moral philosophy and just
7:55
thinking about ethical issues generally uh and there I discovered that there
8:02
were a lot of biomedical issues things relating to abortion you know
8:07
reproductive Technologies and so on and so forth and I discovered this whole
8:12
area of bioethics uh where people were interested in you know these new new
8:19
technology so things like end of life how do we decide uh when to let a loved one go uh and so on and so forth so
8:27
there were a lot of complex ethical issues that I encountered and I just you know thought hey I want to spend my life
8:34
doing you know thinking about these very these questions that are really important to a lot of people and it
8:41
seems like a worthwhile Endeavor uh you know to be working on these topics and it keeps changing it really seems like
8:47
it keeps changing I remember you know end of life issues it was top of mind for so long but now there’s a lot of
8:54
other sort of ethical dilemmas Reproductive Rights continues to be very very top of mind but we’ll we’ll come to
AI Synopsis
9:01
that in a moment but I want to switch back to AI um you know personally I’m gonna
9:08
probably estimate that I read about five to six articles every day I’m not even
9:13
exaggerating on artificial intelligence some of them are total garbage and some are really really insightful I won’t say
9:19
who the garbage ones are but um everybody has an opinion which makes it really intriguing but let me start back
9:27
from Ground Zero if you will um how do you define AI and we hear about generative AI narrow AI there must
9:35
be broad AI if there’s narrow AI well can you give me a quick synopsis or give folks a synopsis and then let’s go
9:41
deeper on it sure uh so hey there’s really no generally agreed-upon
9:47
definition of what AI is roughly I take it to mean something like getting
9:52
machines to, like, sort of engage in processes like thinking and reasoning
9:58
that sort of you know if it you know that we do that humans do right uh and they’re sort of different forms of AI so
10:05
the old forms of AI it’s called symbolic Ai and that’s just a bunch of um you
10:10
know you basically say if something happens then something happens you know
10:16
um and that’s you know using symbols uh and things like that the new AI is
10:21
something called machine learning and machine learning basically learns on its own you get get the algorithms to learn
10:28
on its own and there and they’re different types very quickly there’s something called supervised learning uh
10:34
where you supervise that is exactly what it means so you train the algorithm on
10:40
certain data and you tell the algorithm which data is correct and then it learns
10:45
by you telling it and then there’s something called unsupervised learning where uh the algorithm just is able to
10:53
sort the different data into different piles on its own um and what’s really interesting is now
10:58
there’s something called Deep learning and deep learning just basically involves uh this sort of sorting on its
11:05
own but in a very complex way there’s a huge network of nodes
11:10
um and because of that uh because it you know there there could be millions and
11:16
billions of nodes uh where it’s trying to figure out different things that’s based on sort of math and algorithm and
11:22
because of that it creates something called a black box problem where even the engineers who are trying to uh code
11:31
the thing don’t know what’s going on so it’s unlike the symbolic AI where the engineers know exactly what’s going on
11:37
because they put in that if then you know throughout the whole codes uh in this case the algorithm is doing that
11:44
and so even the engineers don’t know, uh, what will happen with the algorithm.
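To make that contrast concrete, here is a rough illustrative sketch in Python (the data, feature count, and model sizes are all invented for the example, not anything from the episode): a symbolic system's behavior is written as explicit if-then rules anyone can read, while a machine-learning model's behavior lives in thousands of learned numeric weights that no engineer can read off individually — the black box problem Matthew describes.

```python
# Illustrative sketch only: hand-written symbolic rules vs. an opaque learned model.
import numpy as np
from sklearn.neural_network import MLPClassifier

def symbolic_classifier(has_tail: bool, barks: bool) -> str:
    # Symbolic AI: every decision is an explicit, inspectable if-then rule.
    if has_tail and barks:
        return "dog"
    return "cat"

# Machine learning: the "rules" are whatever weights training happens to produce.
rng = np.random.default_rng(0)
X = rng.random((200, 10))                    # 200 made-up examples, 10 features each
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)    # a hidden pattern the model must discover

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
n_weights = sum(w.size for w in model.coefs_)
print(f"the learned 'reasoning' is spread across {n_weights} numeric weights")
# None of those numbers is individually meaningful the way an if-then rule is,
# which is why even the engineers can't simply read off what the model will do.
```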
11:51
So that notion of supervised versus unsupervised definitely gives me a little bit of
Supervised vs. Unsupervised
11:56
pause just the idea so somebody could effectively kind of decide what’s out of
12:01
scope what’s in scope we’re not going to have anything about brown dogs in in the
12:07
learning of the AI so brown dogs will never exist um in the learning process am I
12:12
overstating that or yeah no that’s right that’s right I mean the supervised learning is more like uh
12:20
here’s a picture of a brown dog and the the uh let’s say the algorithm thought
12:25
that was a cat and then you would then tell it no this is actually a dog and the algorithm was would learn from what
12:32
you tell it that oh it’s a dog uh and the unsupervised one is where it’s just
12:37
you got a bunch of dogs and cats and it’ll just start to sort them, you know, sort of using uh mathematical ways to
12:44
figure out which ones are dogs and which ones are like cats based on their features, and the deep learning is
12:50
basically they’ll go into the pixels and kind of start to figure out different pixel values and come up with a
12:57
probability estimate of like whether it’s more likely to be a dog or a cat
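As a rough illustration of the two regimes Matthew just described (the “pet” features and numbers below are made up purely for the example), supervised learning is handed the correct labels, while unsupervised learning only ever sees the unlabeled data and sorts it into piles on its own:

```python
# Illustrative sketch: supervised vs. unsupervised learning on made-up "pet" data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Pretend each row is [weight_kg, ear_floppiness]; dogs are heavier and floppier.
dogs = rng.normal([25.0, 0.8], 0.5, size=(50, 2))
cats = rng.normal([4.0, 0.2], 0.5, size=(50, 2))
X = np.vstack([dogs, cats])

# Supervised: we tell the algorithm which rows are dogs (1) and which are cats (0).
y = np.array([1] * 50 + [0] * 50)
clf = LogisticRegression().fit(X, y)
print("supervised prediction for a 22 kg, floppy-eared animal:", clf.predict([[22.0, 0.7]]))

# Unsupervised: no labels at all; the algorithm just sorts the rows into two piles.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("unsupervised cluster assignments (first and last few):", clusters[:5], clusters[-5:])
```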
13:03
um and let me just add one other thing yeah please really interesting is that
13:08
they apply the same uh same idea to language and this is why we have these
13:15
natural language processing like chat GPT so they sort of use the these uh you
13:21
know sort of just uh based on uh the you know all the different data uh the
13:27
computer can kind of predict which word is likely to happen you know come up next say you put in the string the you
13:34
know, “the brown dog jumped over the…” you know, and then it’s gonna kind of predict “fence,” you know, or something like that
13:42
um and so uh and all that is based on algorithms and they kind of just learned and learn on its own uh based on the
13:48
data that it sees.
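That next-word-prediction idea can be mimicked, very crudely, with nothing more than word-pair counts. Real systems like ChatGPT use enormous neural networks trained on vast text corpora, but this toy sketch (with an invented two-sentence corpus) shows the same underlying objective — score the possible next words and pick from the most probable:

```python
# Toy next-word predictor built from bigram counts over a tiny, made-up corpus.
from collections import Counter, defaultdict

corpus = "the brown dog jumped over the fence . the brown dog chased the cat .".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_probabilities(word: str) -> dict:
    counts = bigrams[word]
    total = sum(counts.values())
    # Turn raw counts into a probability distribution over possible next words.
    return {w: c / total for w, c in counts.most_common()}

print(next_word_probabilities("dog"))    # {'jumped': 0.5, 'chased': 0.5}
print(next_word_probabilities("brown"))  # {'dog': 1.0}
```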
AI Prediction Errors
13:54
So I recall reading an article maybe a couple of weeks ago that there were some prediction errors with certain
14:00
things — like predicting whether someone has a tumor — and they saw that where there were tumors there was a picture of a ruler
14:07
and a thumb uh next to it so it started to predict that wherever there was a
14:13
ruler and a measurement that there could be a tumor and that’s maybe that’s the Early Learning sort of you know
14:18
difficulties and we’ll get beyond that but that’s what seemed to give quite a few people pause of boy there’s they’re
14:25
making these kind of very basic errors that a human would never make but is is that um is that sort of just like early
14:31
learning problems that will be sorted out or is that potentially a long-term dilemma that we might see that’s
14:38
actually a long-term dilemma it’s a very serious problem it’s called an adversarial attack and what’s happening
14:47
is that you know current AIS at least with machine learning it does something called associative learning so it’s kind
14:54
of you know it’s what I was saying earlier it’s probabilistic it’s just trying to figure out what it’s likely to
14:59
be but it doesn’t really know what a dog is or what a cat is it just sort of uses
15:04
a bunch of pixel values and you know if there are enough sort of tails or sort
15:09
of nose shapes then it sort of says it’s a dog but it doesn’t know what a dog is and so the problem with that is you can
15:16
trick it just like the way you’re saying and so sometimes they’ll maybe the way it figures out it’s a dog is because of
15:24
you know um you know some other features that are completely irrelevant and so there’s
15:29
evidence where there’s something called the one pixel attack where you can take a picture you can take a you know sort
15:36
of uh let’s say there’s a picture of a panda and you just take one pixel away and it’ll uh the AI will think that it’s
15:44
a gibbon with, like, 99 percent confidence that it’s a gibbon right and that’s very
15:51
problematic in the case of things like cancer right because you want this AI to
15:57
kind of know what it’s doing and if it’s sort of if it can be tricked so easily then we have to really worry about
16:03
whether we should use it in healthcare or use it in self-driving cars or use it
16:09
in weapons technologies and so on and so forth.
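A rough sketch of why such attacks work (the “classifier” below is an invented, deliberately brittle linear scorer, not a real image model): because the model’s decision is just arithmetic over pixel values, nudging each pixel a tiny, humanly invisible amount in exactly the direction the model is sensitive to can flip its answer — the same intuition behind gradient-sign attacks like FGSM.

```python
# Illustrative adversarial perturbation against a made-up, brittle linear "classifier".
import numpy as np

rng = np.random.default_rng(0)
image = rng.random(10_000)            # pretend: a flattened photo, pixel values in [0, 1]
weights = rng.normal(size=10_000)     # the model's learned (and brittle) weights

def classify(img: np.ndarray) -> str:
    return "panda" if img @ weights > 0 else "gibbon"

score = image @ weights
if score <= 0:                        # flip the toy model so the demo starts at "panda"
    weights, score = -weights, -score

print("original prediction:", classify(image))

# Push every pixel a tiny amount in the direction the model is most sensitive to
# (the sign of its weights); eps is chosen just large enough to flip the score.
eps = (score + 1e-6) / np.abs(weights).sum()
adversarial = image - eps * np.sign(weights)

print("per-pixel change:", round(eps, 5))                  # a fraction of a percent of pixel range
print("adversarial prediction:", classify(adversarial))    # now "gibbon"
```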
16:16
And that, I presume, was perhaps one of the issues a few years ago even, but perhaps recently, where, um, the Tesla was having the issue of not
16:22
recognizing pedestrians um and uh in crosswalks and there is is
16:28
so presumably that was kind of the same type of issue, if I— Exactly, because it wasn’t trained on enough data so
16:34
there was one case where uh one guy there was this guy who was walking
16:40
across the street with a bike uh he was walking the bike and there was just
16:46
wasn’t enough training data set of people walking the bike I mean there are plenty of data sets of maybe people on
16:51
the bike biking you know alongside uh cars but if you don’t have enough data
16:57
set the algorithm can’t recognize that you know here’s a different scenario because
17:02
it doesn’t understand what a bike is and it just ran over, you know, like it caused an accident. Interesting — it sounds like it almost can
17:09
reduce as many things to the common denominators and if something’s more exotic we’re not seen as often that it
17:15
may end up falling it almost goes extinct as far as the AI is concerned maybe I’m overstating that though yeah
17:21
no and so right now a lot of AI researchers are trying to do something
17:26
called One-Shot learning where they can because we can just learn we could you know like once we learn what a bike is
17:32
that we can figure out sort of different things about a bike including just a person walking with a bike right
17:39
um and um you know right now there are attempts to try to make algorithms smarter but we
17:46
might you know like I you know I said that this was a long-term problem because this associative learning uh
17:53
fundamentally doesn’t learn about like causal relationships um and so a lot of uh AI researchers are
18:02
saying now that the next uh frontier for AI is for it to learn causal
18:07
relationship so we need to have some sort of causal learning process for AI you know I have to say Matthew for
18:13
someone who is a philosopher and ethicist you sound like you have a pretty good handle on the technology but maybe um so
18:20
well let me bring it back to a few ethical technology questions for you
18:25
um so first things first something that recently that popped in the news about uh maybe was a week or so ago that the
Ethics and Safety of Technology
18:32
Commerce Department um asked for public input which I thought was interesting and they said they want to make AI systems I’m going
18:39
to read this here legal effective ethical safe and otherwise trustworthy not too ambitious
18:46
um the department said they’re seeking feedback of audits or assessments that should be required before any company
18:52
brings out new AI tools and it’s I know that’s not associated with that moratorium
18:58
um sort of request from those 1100 experts and technologists it feels like
19:03
this is like an early Overture for regulatory oversight
19:09
what’s your reaction from an ethical standpoint but also even a pragmatic
19:14
standpoint what does that mean is that important is that is it an overreach well what’s your your reaction to that
19:22
um so I think it’s a great idea so they’re you know they’re different schools of thought so there’s something
19:28
called the West Coast uh sort of theory where uh people think you know just uh
19:34
we should be laissez-faire let uh Market forces you know no regulations and then
19:40
when there’s a problem we’ll fix it right and that approach has gotten us Tesla cars that have hit people you know
19:47
sort of uh just run over people um and things like that and I think
19:52
there’s a different approaches to the east coast approach where it’s more regulatory it’s kind of like the FDA the
19:58
way we regulate medicine uh you know they’re a bunch of they are institutional review boards where you uh
20:06
you know from the beginning of the research to all the way to its deployment their you know people looking
20:12
over to make sure that it’s safe for people um and I think that uh that approach
20:18
seems like a really good idea especially actually if the technology is going to be used in for example Healthcare right
20:25
when it’s dealing with human subjects you definitely want to make sure you have that kind of you know oversight
20:31
because otherwise you could harm a lot of people now recently I posted
20:36
something and uh and a Gentleman who’s a very very knowledgeable and involved in
20:42
the space of artificial intelligence said we have to stop worrying and trying to put our foot on the brake I’m going to
20:48
paraphrase we can’t put our foot on the brake we have to keep letting progress moving forward and it’s a mistake
20:55
um or it may be ill-advised to just sit here and worry and discuss and debate in a circular fashion
21:01
um how do you react to that is that is he more about progress is that should we
21:06
just be pushing forward more in the west coast philosophy yeah your thoughts about that yeah so there are people who
21:14
I uh I call them sort of techno optimists they think hey you know technology is going to solve all the
21:20
ills of the world world and we should just you know full steam ahead and they’re also techno pessimists who think
21:26
you know the you know the world’s gonna end we’ve seen some of that the people who you know the moratorium is is kind
21:33
of you know the six months moratorium is kind of uh fueled by people who think you know this is going to kill us all
21:38
right I think that um I’m more of a techno realist and what
21:44
that means is you know I think we need to you know these are technologies that can benefit us but they can also harm us
21:50
so we need to be much more deliberative we need to regulate them and you know we
21:56
need to figure out a good path forward where we can have both Innovation and uh
22:02
you know safety and sort of like uh and we’ve done that in other areas so take uh new drugs novel discoveries of drugs
22:10
we’ve been able to do that we have processes for that and some of that those processes can be ported over uh to
22:17
this area as well so I’m not as pessimistic as the technopessimist and
22:23
but I also don’t think that we should just sort of go full speed ahead with no regulation and unfettered access
22:30
um and so on and so forth I feel like the Techno realist must live somewhere in the midwest like Illinois
22:36
somewhere in the middle of the country I like that term a techno-realist um a couple of things I’m just going to
22:42
note for folks who are listening in and chomping at the bit they’re thinking I have a lot of questions Dan’s not
22:48
getting to the ones I want to hear — put those questions in, drop them in, we’re going to get to them in just a just a
22:54
handful of minutes but I’m gonna kind of uh kind of control the mic for a little bit longer because I have a lot more to
22:59
ask Matthew but I promise you will have a chance to have your say um coming to this point about being a
23:05
techno realist I’d love to hear your thoughts if you are a mid-sized to a
23:10
larger company and you’ve just been tasked with coming up with an AI strategy or coming up with an approach
23:18
with um with a you know a new technology in your organization and you’re not sure how to approach it where does the the
23:26
ethical framework fit in does it come in at the very end like we did all the work let’s now pressure test it it’s at the
23:32
very beginning is it all throughout and what type of practices might you suggest for kind of an everyday guy like me who
23:39
might be asked to do this yeah um so I’m on a couple of boards uh of
23:44
like companies where I help them think about ethics and one of the things that I try to say is try to think about
23:50
ethics right from the get-go um because you know uh otherwise you
23:56
might end up spending uh millions of dollars or even billions and then find out that you know it’s not going to meet
24:03
ethical requirements then you have to you know sort of throw away the whole project it’s much better to kind of be
24:09
thinking about those issues from the issues from the start and I’ll just give an example where uh thinking about the
24:15
ethics can really help a company so uh you know right now we’ve been talking about GPTs being so
24:23
successful but I think one of the reasons why OpenAI has been so successful is because it was thinking
24:29
about ethics from the start and why do I say that so if you just think about a
24:35
natural language models — they have existed and many many companies have had them
24:43
um so if you remember Microsoft a couple years ago they had a chatbot called Tay
24:48
and that you know they when they released it uh um you know after a you
24:54
know a couple days it was sort of spewing out all these racist things uh and so they had to shut it down
25:01
um and um and so I think that what openai did was different which is they uh first of
25:09
all they were thinking about they were thinking about racist sexual language inappropriate language inappropriate
25:14
actions and so on and so forth and you can just tell that they created modules where when people ask inappropriate
25:21
things or unethical things it got sort of they blocked it right and so it was a
25:27
team that was thinking about these different types of things um and that’s why I think they were able
25:33
to get such a you know sort of uh huge uptake uh by the public because you know
25:39
they were able to kind of make sure that the chatbots stay within the confines of Ethics yeah it is interesting not that
25:46
I’ve tried putting anything inappropriate in chat GPT but I have heard rumor that if you ask
25:52
inappropriate questions it’ll kind of push things out of bounds which is which is really interesting
25:59
um kind of a little bit uh connected to this one of the stories that hit the
AI Sexual Harassment Scandal – How to Handle It
26:04
news wires about a week or two ago also I saw it in USA Today Was about a sexual
26:11
harassment or a purported sexual harassment Scandal a a gentleman who’s a
26:16
law professor at George Washington um was accused of a of a of you know um
26:22
you know inappropriately touching somebody on a trip to Alaska and it hit the news and he had never been to Alaska
26:28
he had never been on a student trip and the article that was referenced — a Washington Post
26:34
article — didn’t actually exist so he wasn’t particularly amused by this to say the least but it really kind of for
26:42
me brought an important question is can you hold open AI accountable for what a
26:47
chatbot says I mean they’re going to say look look we we trained it it’s an unfortunate error
26:52
um you know how do you handle these situations from both a legal and an ethical standpoint because there will be
26:58
more of these and there’s more stories for sure that I’m not reciting how do they handle it
27:04
yeah that’s a great question so uh you know the chatbots uh ChatGPT and GPT-4
27:10
and other Technologies like it they they’re sort of you know as I was saying it’s sort of it’s It’s generative AI so
27:17
it’s trying to generate uh it’s trying to predict what the next word will be um and so it doesn’t hew to
27:24
reality uh you know sort of it’s just sort of thinks oh the you know probably this next word will be X right
27:32
um and so it’s gonna make up a lot of things and that’s one of the challenges
27:38
so you know between GPT-3 and GPT-4, GPT-4 already has uh less hallucination
27:46
so it doesn’t you know that’s a technical term you know they there’s actually uh a setting there where you
27:52
can kind of uh increase hallucination if you want, like if you’re writing poetry or something like that, or reduce it, you
27:59
know, decrease it, so that it’s more factual.
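The “setting” Matthew alludes to roughly corresponds to the temperature parameter that most LLM APIs expose: low values make the output more deterministic and conservative, higher values make it more varied and more willing to invent. A hedged sketch using the OpenAI Python SDK’s chat interface (it assumes an API key in the environment, and the model name is just an example):

```python
# Sketch: the "temperature" knob most LLM APIs expose. Low = more conservative,
# high = more varied (and more prone to making things up). OpenAI Python SDK shown;
# adapt to whatever client you actually use.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str, temperature: float) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # example model name; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content

factual = ask("Summarize the plot of Hamlet in two sentences.", temperature=0.1)
creative = ask("Write two surreal lines of poetry about Hamlet.", temperature=1.2)
```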
28:07
But fundamentally it’s still sort of generative AI so it’s gonna make up
28:13
stuff um and so what should should we hold open AI accountable it’s hard to say so
28:19
like take defamation for example like just I’m not a lawyer but as far as I
28:24
you know I understand defamation requires that you intentionally uh say something false about somebody it’s not
28:31
clear that um ChatGPT has intentions uh that it’s intentionally doing this right so and then
28:38
that open AI intentionally put that information in about this professor and
28:43
so on and so forth so probably legally it’s you know you probably couldn’t get
28:49
like it probably wouldn’t like fly to sort of have like a defamation lawsuits
28:55
um should we hold open AI accountable I think they want to hold themselves accountable right so they it’s in their
29:01
interest to make sure that when the information like that about real people are out it’s as accurate as possible and
29:08
that’s sort of I take it that’s the goal of chat GPT 5 is to you know have even fewer hallucinations
29:16
um and it’s come up it’s not just sort of like these cases but also um citations that they’re sort of you
29:22
put in people they’ll say hey so and so published this paper and it’s totally not true it’s completely fabricated
29:28
right um and um so that that problem is going to persist as long as we use uh
29:35
something like uh this sort of machine learning that I was talking about where it’s just kind of learning on its own
29:41
and it doesn’t really understand you know uh who Dan is or who who I am and
29:47
you know and things like that so yeah very very interesting I I love the term hallucinating it’s got a very
29:55
interesting kind of illusionary reference to that but but if you want to hallucinate more we can give something
30:02
to the machine to see how it handles it but um awesome I I I’m going to turn to some
Audience Questions
30:07
questions that have come up and I’ll just note there are some questions that were sent direct to me
30:12
um so thank you for sending those but there are also some that have been put in LinkedIn I’ll start with one of the
30:18
first ones that was inserted into to LinkedIn um the question was can you explain how
30:23
a single Pixel can accomplish what you described is it a specific location on
30:29
or of the pixel so maybe a little technical question we’ll uh we’ll test your technical Acumen out Matthew sure
30:36
so the single Pixel attack is just uh they figured out that
30:41
um it’s not as it’s not a single Pixel in a specific location on a picture so
30:47
if you were trying to do this at home it’s not quite like that but they were able to figure out that
30:53
um um the algorithm seems to be finding certain pixels more Salient in their way
31:01
of predicting whether something is something like whether is a picture of a panda right
31:07
and such that if they were to remove that pixel then all of a sudden their
31:12
confidence in this being a panda drops right so and it’s really weird because if you just look at that pixel it’s just
31:19
like a black dot or a white dot for you like the human eyes uh and but for the
31:25
algorithm it seems to be so important and then there let me just mention that there’s a another one where they did
31:32
something like uh felt like a filter thing where um uh they took about 400 pixels uh just
31:40
randomly from a picture just kind of they just changed it but for a human eye
31:46
so if you look at two pictures like it’s basically you see two picture two pandas
31:51
right um and they look exactly the same you can’t even see it it’s like 400 pixels out of a million they kind of alter the
31:58
values right so to a human eye you can’t see anything but again they can get the computer the algorithm classify it
32:05
wrongly just based on just putting this filter in and what it’s you know again
32:11
the idea is the same is like uh because of the deep learning it’s looking at
32:17
different things kind of like the example that I used then is looking at and it’s finding things like oh there’s
32:22
a ruler therefore it’s cancer you know that type of thing uh whereas we of course wouldn’t think ruler therefore
32:29
cancer but that’s what the algorithm is doing um so I I’ll uh I’ll follow up the one
32:36
of the the questions that have been asked me earlier I’m going to paraphrase it a bit um is about academic dishonesty and AI
32:44
um my oldest daughter is a student at uh Indiana University at the Kelley School and um one of the things that they were
32:51
already grappling with and asking students to try to solve is how do we ensure that students don’t abuse
32:58
um ChatGPT to write their term papers to answer their questions for essays it completely denudes the value of kind of
33:04
academic integrity and I I wonder um what what may be some of the things
33:11
that that have already been proposed to be able to reduce that it’s you want to use the tool that’s incredibly powerful
33:18
it’s sort of like ignoring a calculator and saying nope still got to use your Abacus um because it’s cheating how do you
33:24
evolve with the technology knowing that students and professors also will be using it
33:30
yeah so I think the one of the things I do in my classrooms is to kind of
33:36
explain why plagiarism is bad for the student right and so uh
33:43
NYU is a very expensive institution it’s a private school so I sort of I tell them you know you’re spending all this
33:49
money and then you’re getting the computer to write your essays like the essay is your opportunity to really
33:54
develop your thinking as a you know your thoughts and your ideas as a thinker
34:00
right and if you just use chat GPT you’re depriving yourself of that you
34:05
know opportunity you might as well just save your money you know and and not do that right
34:11
um and this is sort of a rare opportunity where you can kind of uh have this you know this time to think
34:17
about these ideas develop them and then really work them through the way to use ChatGPT is maybe it can help you uh
34:24
generate some ideas like let’s say you know like you’re you’re having a writer’s block and and you know but
34:32
don’t use it to write the whole essay like sort of get some ideas and then from there on like start writing
34:39
yourself like you know develop your own thoughts the problem with ChatGPT is once you start to replace the logical
34:45
reasoning the critical thinking part then you you’re really depriving yourself of something very important
34:51
I I’ll just quick follow up on that is uh you know my hypothesis has always
34:57
been that our our memory is is being deprived when we overly rely on search
35:02
engines and there is some research about there being some hippocampal effects
35:08
um but but moving to this next step is there a potential risk that it could
35:14
actually affect our critical reasoning ability long term if we end up relying primarily on ChatGPT or other
35:20
AI Technologies or am I overreaching in my uh my my hypothesis
35:25
yeah no I don’t I don’t think so I don’t think you’re reaching at all I do think that if you
35:31
um just use ChatGPT to do the thinking for you then you know I think the thinking is a muscle and uh you know
35:39
this is what I tell my students um uh you know you have to practice you have to keep working at it
35:45
um and the more you do the better you get at doing it and being able to recognize different problems so you can
35:52
go to think through them being able to break them down all this analysis they’re very important for whether
35:57
you’re in the Academia or in business and so on and so forth um and you lose that ability you know if
36:03
you don’t use it and so I I do think that that’s a real danger and so it’s something that we want to be care like
36:10
we want to be aware of yeah and so somebody else asked another question going I guess people enjoy the the
36:17
reference to hallucinating um for a variety of reasons um the hallucination problem is fascinating as
36:24
I said it seems that it can be used for beneficial purposes but where is the
36:29
line and where could it be harmful yeah that’s a great question so yeah I think
36:35
I’ve mentioned where it might be beneficial if you’re writing fiction where you know facts don’t matter as
36:41
much or poetry or things like that then a bit of hallucination could be good right sort of opens up creativity it’s
36:50
like algorithms on LSD or something like that right but
36:56
um if you need factual things and uh it and this the AI is hallucinating that
37:02
you could uh produce things that are just factually wrong and those could
37:08
have consequences right um yeah yeah like as in the case that you’re talking about uh with respect to
37:14
sexual harassment right you could end up accusing somebody uh, you know, like spreading rumors that so-and-so
37:21
engaged in sexual harassment when it did not happen at all you know so I had one
37:27
more question then we’re gonna go to our final round it’s I mean time’s flying but I have to ask this one somebody had
37:34
asked this question before we even did this and they they emailed it over to me when they said they were uh they’re
37:40
gonna be attending and uh they asked a question about China specifically so
37:45
there are a lot of Dimensions with China but one of the things that was brought up and I’m going to read this back to
37:50
their email they sent me which was um that China’s proposed new rules to ensure generative AI sticks to socialist
37:58
values um they you know after Alibaba rolled out its own ChatGPT-style chatbot that came out from the
38:05
government what does that mean how do you get AI to stick to specific values I
38:11
mean you talked about supervised versus unsupervised but there are implications of overly controlling it — will China be
38:20
able to do that and could anybody do that could you know a right-wing movement say we’re not going to hear
38:25
about anything you know that references Democrats or or you know Progressive
38:31
politics can you actually control for certain ethical Norms yeah I think uh if
38:37
China were to try to do that I uh it would face a dilemma so either it so you
38:43
know algorithms are trained on data so if you just restrict uh the data set to
38:49
socialist values say right um then it’s uh maybe you can create an
38:55
algorithm that does that but it’s not going to be very useful because there’s so many other things out in the world
39:00
and if you just if you’re trying to you know restrict it only to that then it can’t learn a bunch of like basically you
39:08
know what the rest of the world is doing you know uh but if you open it up then
39:14
it’s gonna learn a bunch of other values right and it’s not gonna it’s gonna be very hard for uh it you know sort of
39:21
even the engineers because of the black box problem even the engineers to figure out hey which part is the socialist one
39:28
and which part is the non-socialist one because there are going to be billions of nodes in there and it’s almost
39:35
impossible like once you open it up and so uh maybe it comes in degrees maybe
39:40
they can have a bit more socialist value but they’re not going to be able to be able to eliminate it once they open it
39:46
up so it sounds like chat GPT in general yeah this is a significant Dilemma to
39:52
any authoritarian regime that’s trying to control information no exactly exactly it’s gonna just uh uh
39:59
and then there’s also the user inputs right so ChatGPT is also learning
40:05
from when users put in things so let’s say you create something with socialist
40:10
values and it’s completely isolated and contained but when users start to use it
40:16
and they put in non-socialist values it’s going to learn other things unless
40:22
you again restrict it like don’t learn from the users but then it becomes very not very useful so interesting yeah all
40:29
right well thank you for that that Insight on that because that that’s a complex issue I’m interested to see how it unravels over the next the next six
40:36
months to a year or more we’ll have to do another session Matthew so we can find out what happens next which reminds
40:43
me we do have a part two of our of our session here together and part two I’ve
40:49
stolen from ABC it’s called what would you do now if you’ve seen the show before I think John Quiñones is
40:56
the host or he’s been one of the hosts and the idea behind it is you’re presented a moral dilemma and in that
41:03
moral dilemma you have to decide in the moment what would you do and uh I’ve
41:08
concocted several sort of scenarios based on real things that that I’ve come
41:14
across I haven’t shared it with Matthew so he doesn’t have a script written that he’s going to refer to and
41:20
I’m going to ask you Matthew if you don’t mind to give it like maybe a minute half a minute or so reflection
41:26
and and uh and an answer what would you do or how would you approach it so
41:31
you’re ready to do it let’s go let’s do it all right I’ll give you one hint the
41:37
first thing we’re going to start with is workplace related so I have a few scenarios related to the workplace which
41:42
a lot of folks are listening they work day jobs they work in companies and they maybe or have already experienced these
41:49
so the first one I have your company requires that you have to
41:54
get vaccinated for the next pandemic strain whatever that may be
41:59
um but an employee decides to refuse to be vaccinated due to personal beliefs or medical reasons you are that HR person
42:08
or an organization person responsible for this program what do you do what
42:13
would you do if that situation so it depends on the pandemic it depends
42:19
on the virus um and it all so I and then it depends on the medical uh exception or the
42:26
personal belief so if um if it’s a valid medical reason and
42:31
let’s say the personal belief is a religious one uh I think typically they’re religious exemptions
42:37
um you know that will be valid um but you know it it also depends on
42:42
the virus if the virus is very virulent and can kill a lot of people uh and the
42:48
vaccine can kind of stop it on its track then you know you might think that uh
42:55
even if you want to accommodate the religious exemption maybe you require the person to work from home uh luckily
43:01
now we can all use zoom and things like that so maybe it’s a bit easier you know in this day and age uh for that type of
43:08
accommodation uh but certainly uh you should allow some accommodation
43:13
depending on the reasons but at the same time you have have to make sure that
43:18
it’s safe for the workers the people who do come in so that’s you know it’s part
43:24
of your responsibility to everybody in the company so I’m gonna guess you probably have grappled with this one a
43:30
few times over the past couple of years but uh all right well I’ve got another one of Let’s we’re gonna bring the
43:36
stakes up even higher Matthews so something that people are grappling with right now
43:41
um you’re hired by a company during covid and it was your understanding when you were hired — a mutual understanding,
43:48
let’s say — that your job would be a hundred percent virtual all on Zoom but the CEO
43:54
had a change of heart — evil-hearted person that he is — and they
43:59
decided all employees have to come to the office at least three days a week
44:04
now what would you do if you are working for that company uh or advising that company
44:12
let’s say yeah yeah so that’s a great question so this
44:17
is a bit of a legal issue it depends on what’s in your contract right if your contract says specifically that you
44:23
don’t have to come in at all I think you’ll have a legal basis uh but uh
44:28
independent of that it depends like you know if everybody’s coming in um and you’re you know let’s say the
44:35
offices in New York and you’re still working from Florida right you know it might become a real hassle for you you
44:41
know sort of where everybody’s coming in you’re losing that Social Capital right uh and you’re not getting the most out
44:49
of your job if everybody’s coming in uh and things like that and so at some point maybe you need to think about
44:56
getting a new job you know just because uh uh you know the pandemic was a very
45:04
exceptional you know situation and you can see why doing you know like during
45:10
the pandemic you know people would want to be trying to be maximally flexible but now that it’s uh sort of we’re on
45:17
the tail end of it uh you can see why policies might change yeah and is there
45:23
any ethical quandary that that CEO or leadership team would face in doing an about-face after hiring people or how how
45:31
would you view the evaluating that decision if you were a company or the CEO yeah they are right now yeah that’s
45:38
a great question it depends on what was communicated if the CEO says look uh we’re hiring you it’s going to be 100
45:46
virtual now and forever right uh then that’s very categorical yeah that will
45:53
be lying right the CEO would have lied, would have backtracked on uh his or her words, and that would be unethical, you know, even if
45:59
it wasn’t written in the contract. Right, now if the CEO had said
46:07
something like hey we’re during the pandemic so you can kind of work uh virtual uh for now uh so now that that’s
46:15
a conditional right so it assumes that you know once the pandemic’s over we might change our policies right so it
46:23
depends us on specifically what was communicated I think words matter words
46:28
really matter all right I have a another scenario a little bit further out uh
46:34
Into The Ether but not so far out um a new robotic technology that your
46:40
company is testing um has been shown to be able to eliminate thousands of operators at your
46:46
company and you personally have been asked to conduct the test that do the
46:51
final comparison of Workforce to the robots um and you decide that you don’t want to
46:57
do that test what would you do if you are that person being asked to do that analysis is there any ethical quandary
47:05
of effectively doing an analysis or study that’s going to potentially eliminate all these jobs
47:11
yeah so I think uh companies have an obligation to their workers so uh to you
47:18
know if you’re gonna lay people off I think uh ethically speaking at least you should kind of try to give them a heads
47:24
up right uh you know if you foresee that it’s inevitable that you’re going to be
47:30
deploying this technology It’s Gonna Save the company millions of dollars and so on and so forth You’re Gonna Want to
47:36
give them a heads up so that they can uh find new jobs you maybe try to help them uh acquire new skills and so on and so
47:44
forth I know companies don’t often do that for cost reasons and things like that but I think uh uh you know you know
47:52
especially with these workers who have done a lot of work for you I feel like uh it’s it would be the right thing to
47:58
do for companies to uh uh help these workers transition to a different uh
48:05
employment yeah all right I’m gonna take you out of the workplace now — that was a lot of workplace stuff — I’m gonna shift we
48:11
got few more scenarios for you you’ve done a lot of work in the space of
48:16
Neuroscience and we’ve only just briefly alluded to it so there is an area that
48:22
uh you’re probably familiar with bcis or brain computer interfaces which maybe
48:27
I’ll ask you even describe what that is um to to the group to the group here um let me give you the scenario that
48:34
maybe you can backtrack and share a little perspective a company’s developing a new campaign for a product
48:40
and their research partner has let them know that they have the ability to use
48:45
BCI to measure consumer responses to advertisements and taglines and such to
48:51
develop better campaigns is it morally ethical and what would you do if a
48:57
research partner said I can read other people’s minds to help you develop a better marketing campaign so let me
49:03
pause what is BCI and what would you do if you presented that yeah great question so bcis are sort of
49:11
brain computer interface uh for short and um there are different versions of it
49:18
there’s something there’s you know from the very non-invasive to the very invasive so the non-invasive ones are
49:24
things like EEG so electroencephalography uh where you kind of put it
49:29
on your scalp and it kind of measures uh things from your scalp to something
49:34
where you open things up and it’s at the surface of the brain to something
49:39
called deep brain stimulation where you insert an electrode into your brain uh deep beneath your skull and in fact we even
49:47
use the brain stimulation for conditions like Parkinson’s depression and so on
49:52
and so forth so they’re about a hundred thousand people who have used that uh and the U.S government for example is
49:58
really interested in uh DBS so deep brain stimulation because there are a lot of soldiers who are coming back uh
50:05
from war with post-traumatic stress disorder and there’s an idea that maybe for PTSD you can use DBS to
50:13
sort of affect the amygdala so the theory is that PTSD is the result of
50:18
hyperactivity in the amygdala which is your it’s sort of the emotional Center of your brain and there’s like excessive
50:24
emotions from the trauma and so if you disrupt that you can kind of affect PTSD now back to your question Dan so uh
50:32
there are actually all these EEG devices that people are already deploying where they
50:40
um you know precisely for advertising reasons so IKEA for example has a headset where uh
50:47
you know there’s like research groups where they put this headset on and they get them to walk through sort of like
50:52
sort of look at different IKEA furniture and things like that and it’s for advertising purposes exactly what
50:58
you’re talking about uh and you know they monitor sort of like they can monitor different types of waves in the
51:04
brain and so on and so forth um can they read your mind um EEG is kind of not very accurate but
51:12
it can do something like reading your mind already so there’s there’s a there was actually a study that showed that
51:19
you can use EEG to figure out someone’s PIN codes — like, potentially — you know, so they’re getting
51:27
them to play a game and then sort of using EEG they flash different numbers like one two three four five six seven
51:32
eight nine zero right and then on the basis of that certain numbers will show
51:37
up more strongly on EEG and the idea being that if it shows up more strongly it’s probably one of the codes that you
51:45
are associated with like that you care more about right so if your PIN code is
51:50
zero nine seven eight these are not my PIN codes by the way but you know like then it shows up you know more strongly
51:57
on the brain and so they can already do that so it’s very uh premature but it’s
52:03
sort of on the way — the research has been done in that area.
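A toy sketch of the analysis idea behind that kind of study (every number here is simulated; real experiments rely on event-related potentials such as the P300 and far more careful signal processing): flash each digit many times, average the EEG response evoked by each digit, and see which digits stand out.

```python
# Simulated sketch of the EEG/PIN idea: digits the subject recognizes evoke a
# slightly larger average response. All data here is invented noise plus a bump.
import numpy as np

rng = np.random.default_rng(7)
pin_digits = {0, 9, 7, 8}        # digits the simulated subject "cares about"

def simulate_epoch(digit: int) -> np.ndarray:
    noise = rng.normal(0.0, 1.0, 100)                   # 100 samples of EEG noise
    amplitude = 1.5 if digit in pin_digits else 0.5     # bigger evoked bump for PIN digits
    bump = amplitude * np.exp(-((np.arange(100) - 30) ** 2) / 50.0)
    return noise + bump

# Average the response in the window around the bump, over 40 flashes per digit.
avg_response = {
    d: np.mean([simulate_epoch(d)[25:35].mean() for _ in range(40)]) for d in range(10)
}
suspects = sorted(avg_response, key=avg_response.get, reverse=True)[:4]
print("digits with the strongest average response:", sorted(suspects))
```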
52:10
And it sounds like there’s a need for ethical frameworks around this because it seems like it’d be very easily abused. Yeah, so, well, you know, we talked about
52:16
China and um like different places they’re uh imagine just work workforces
52:22
requiring you to wear EEGs and they’re monitoring your uh productivity are you
52:28
surfing the internet are you paying attention and so on and so forth right uh they uh another thing that’s been
52:35
deployed right now is um sort of truck drivers right are they falling asleep right you can actually uh
52:43
you know use these devices to track REM you know sleep cycle and things like that and so uh this is something that we
52:51
need to start to think about and really think about human rights and sort of you
52:56
know workers rights and things like that you know is this too invasive you know because before the brain at least was
53:02
one area where it’s shielded from like like sort of intrusion and sort of
53:07
interference but now companies can actually get into it and so we need to
53:13
you know like that that’s that’s actually something I’m writing about uh in my book on the future brain so I’m
53:20
going to ask one more brain related scenario for you we’re we’re running short on time but I I’d be remiss if I
53:27
don’t ask this question um let’s say imagine there’s a dating website that’s proposed a new way for
53:34
people to get over their past bad relationships experiences
53:40
um so you could try to meet Mr. Right or Ms. Right whoever you’re after um and they offer free erasure of really
53:48
bad date experiences what would you do if they had proposed that as an investor to you is that is
53:55
there is this an ethical overreach is this an ethical compromise to be able to even to give people the option to erase
54:03
bad memories yeah this is like a movie uh my one of my favorite movies
54:08
Eternal Sunshine of the Spotless Mind and uh in fact the technology some of the Technologies are there uh there’s
54:15
something called propranolol and you know I mentioned PTSD and you know soldiers coming back uh propranolol is a beta
54:21
blocker and um if you give if you take beta blockers uh propranolol early enough you can
54:29
actually uh uh cause the memory not to get stored into long-term memory so our
54:35
memories have to be Consolidated right there’s a consolidation stage once you experience something traumatic but if
54:41
you use propranolol it dampens that emotional salience of that experience and then it doesn’t get consolidated
54:49
the downside of propranolol is that you have to take it almost within 24 hours but propranolol is something that you can
54:56
actually take and people have used that as therapy uh for to kind of address
55:02
things like PTSD um like and it’s it’s sort of it’s a
55:07
kind of memory erasure in the sense that you just don’t remember it, or it becomes hazier, that
55:14
particular experience. I’ll just add that there are more high-tech things that are coming online uh so the memory
55:21
thing is really interesting because uh there’s a consolidation stage but there’s also the reconsolidation stage
55:27
which is whenever you think about a memory you have to put it back together again right and putting that memory back
55:35
together requires certain proteins so a colleague of mine here at NYU Andre Fenton is actually working on that and
55:42
he discovered that there’s something called PKMzeta — it’s like a protein — where if you stop that protein from being able
55:49
to you know Express itself then certain memories don’t get reconsolidated in
55:55
that way you can actually erase cause the memory to kind of go completely out and then there’s one other thing which
56:01
is you know what the uh MIT there’s a group at MIT they’re using something called optogenetics which is light using
56:08
light and they genetically modify certain mice and they were able to insert false memories into these mice uh
56:16
you know sort of mice who have never so the way it works is uh you know they
56:22
shock the foot of a mouse and they sort of figure out the neurons that you know where the neurons were activated right
56:28
and then they took the mice who had never been
56:35
shocked never been in that location and then they activated the same neural network in that location and then they
56:42
had the freezing Behavior exactly as if they had been shocked um yeah which is uh it’s like Inception
56:49
so how do you and how how do you keep up from an ethical standpoint with these these Technologies I mean this is It’s a
56:56
revelation to hear about this it’s also frightening and intriguing and it just gets me thinking like how do you keep up
57:02
with it and maybe this is even in the last you know a few minutes that we have when you think about you know advice or
Concluding Thoughts
57:09
for any you know upstart new venture or scientist or researcher that’s
57:14
developing something or any corporate sort of innovator that starts up that has
57:19
you know that kind of plays in the margins if you will um what guidance or recommendation tips
57:26
would you give them so that they don’t end up kind of running afoul yeah so
57:31
that’s a great question and there are different ways to think about the ethics of this thing the way I like to do it is
57:37
uh sort of human-centered approach is based on human rights uh uh you know
57:43
individuals all of us have rights and if you start with there and you think about
57:48
technology and you go from there, you’ll find a lot of things become clearer and easier right
57:54
how do we make sure that certain people’s rights are protected promoted not undermined and so on and so forth
58:00
right and just using that framework uh with human rights in mind can get us
58:06
very far let me give one example so uh right now a lot of companies you know
58:12
there’s like a big data problem like all the companies with AI they need to collect a lot of information right
58:18
um so the the the prevailing thought is hey if someone you know says consent
58:24
consents to uh giving their data away then we can use it for whatever we want right the human rights approach says not
58:31
so fast right you need to make sure that uh first of all are you using this this
58:37
data in a way that’s going to promote human rights promote this individual’s human rights and so on and so forth right and so that makes it that that
58:45
puts the onus on the company, the researchers, to really think through the ethics of it right we can’t just sort of say well now we can use this data so
58:52
we’re going to use this data to discriminate against this person, create facial recognition that’s going to, you know, undermine the
58:59
person’s rights and so on and so forth no you know like you know the human rights approach will stop like we’ll say
59:04
hey we gotta think about that and maybe those are bad ideas great great input we
59:11
are out of time I I wish we could just talk for another couple hours and maybe we can but uh you all won’t be able to
59:17
listen to it but Matthew amazing Insight examples um and just stories bringing this to
59:24
life um first of all big thank you to you for joining and sharing your wisdom and
59:30
thank you for all of our deliberate innovators out there listening we will have our next episode coming up in May
59:36
we’re gonna have a double episode as we’re going to talk more about patient advocacy and patient engagement but
59:43
again big thank you to Matthew and also for all the great insight and wisdom so
59:49
join us for our next deliberate way where we will be looking at the patient
59:54
perspective and patient centricity so thanks for tuning in we will see you
59:59
next time
Dan is the Host of the Deliberate Way Podcast and is a professional moderator and featured TED Talk keynote speaker.
When Dan isn’t off interviewing health and wellness pioneers, he is running a Femtech start-up business, LiviWell, as well as leading the innovation advisory firm, Deliberate Innovation.
Dan is a widely published author in the field of corporate innovation, as well as a contributing writer for multiple journals. And once upon a time, Dan was an executive at Pfizer, heading up the World Wide Innovation Group and developing the award-winning Dare to Try Program.
Dan did his graduate studies at New York University’s Stern School of Business in Political Economy and Entrepreneurship. And when he is not working, Dan volunteers as a wrestling and soccer coach.