24 Comments

I quit a job over this, due to what I perceive as a lack of ethical boundaries. My boss wanted me to create AI-generated articles on mental health topics for (vulnerable) people seeking therapy. Without getting into too much detail, I urge people to be very wary of articles found in magazines, blogs, social media, etc. when seeking medical advice. It takes fake news to a whole other level.

Jan 8, 2023·edited Jan 8, 2023

I've checked out ChatGPT and got some really great but simple articles. Your experience is fascinating. I will definitely share this article with my wife, who is involved in writing articles on healthcare guidelines and the like.

We were discussing the issues with students using these AIs for writing their research papers and the possibility that in the future plagiarism checkers will also be programmed to spot these AI written articles.

FYI, I just did some research myself on your sources. It seems that American Zoologist published articles from 1966 to 2001, so the reference above, supposedly published in 2003, couldn't have existed!

One thing I would like to add: ChatGPT is in a beta phase and is still in the process of learning. So if the references are incorrect, the programmers may take note of that and make the necessary corrections.


Wow! There should be a blue check or similar for posts that are not written by AI


I must say I thought the poem was quite nice.


FYI, I put this whole post into Chat GPT and it responded:

I'm sorry, but I don't have any information on ChatGPT or the specific scientific papers you mentioned. It's possible that the references provided by the AI program were incorrect or that the papers themselves do not exist. It's always important to verify the accuracy and validity of information, especially when it comes to scientific research.


When I was playing with it, it made stuff up too. In my experience, when I added "and if you don't know, say 'I don't know'" to the same prompt, it simply replied "I don't know."


From the picture generated, the AI seemed to confuse a RABBI for a RABBIT! Not too intelligent.


Invention of sources on the fly is not new. I know of one human doctoral candidate who answered a question at his oral dissertation defence by citing a non-existent article from an issue of a journal from before that journal had started being published. He boasted about this at supper that day or lunch the next. He wasn't caught, and went on to an illustrious academic career.

The incident was over 50 years ago.

Jan 8, 2023·edited Jan 8, 2023

Fascinating. The snarkiness of ChatGPT reminds me of the following story from the Israeli legal war chest:

About 35 years ago, two legal titans (IIRC, David Libai and Ram Caspi or Moshe Shahal) appeared opposite each other in a case before a magistrate judge who, from what I'm told (and unfortunately this is an all too common phenomenon), was not the sharpest tool in the shed. At one point the case was going in favor of one of the attorneys, so the second attorney started to make up a legal precedent from a British court with the very same circumstances; in this fake case, the court had ruled in favor of the second lawyer's position. Without batting an eyelash, the first lawyer, who knew the second lawyer was lying through his teeth, cut him off and told the judge: "Your honor, while my esteemed and learned colleague is 100% correct with respect to this precedent, he neglected to note that this verdict was reversed on appeal by the House of Lords!" And the magistrate judge proceeded to rule in favor of the first lawyer 😎😎😎


I wonder if AI could impact SEO such that it manages to get its made-up references to float to the top of search results, with links to articles that it creates. Then most people who try to track down the sources will find something. How many would stop to question whether what they found might be made up by the same AI that made up the fake references? It's a feedback loop.

Man cannot know Absolute Truth. This has always been the case, but much of what has been going on with social media and online access to so much information has made this abundantly clear. History is written by the winners. No amount of reading can ensure with certainty that you know all the factors involved in any situation.

As an observant and believing Jew, this all points me to the Torah (and Hashem). The Torah is a text that has existed for millennia, passed down unchanged (or only with very slight changes that are dutifully recorded and passed down as well) from generation to generation. Linked from parent to child, read week after week publicly and all over the world wherever there are observant Jews, in an unbroken chain. The Jews in China read the same Torah as the Jews in Israel. My brother learns from my father's old sefarim with his notes recorded on the side. That is Truth, with a capital T.

The readers of this site are more likely to dig for real truth than your average Joe. In ten years' time there will be a generation of young adults who will have firm beliefs based on the online information distributed most effectively today, regardless of the truth. But those who keep the Torah in their sights are more likely to keep their eye on life's real goals and not the transient "current" truths.


A colleague asked it how many trees' worth of toilet paper she had used in her 62 years. The answer seemed plausible until I looked at the basic arithmetic; it was way, way off. She submitted the correction to the AI, which apologized and revised its answer, still very wrong.

Don't blame the AI, blame the people who programmed it.

And don't ride in a self-driving car powered by AI until they get the bugs out.


HEY! The readers of Irrationalist Modoxism already ARE the readers of RJ! The only place where IM is advertised is on the little link that comes up next to my name when I comment here or when I link to a specific post. So all the readers already saw all the lokshen that goes on over here and then come on over to my site to get a good laugh!

Anyone have Yaron Reuven's email address? ;)


This is a great story.

The whole subject of AI is one that deeply fascinates me (although I'm not an expert or professional in the area, so please take the below as an informed amateur's best understanding only).

As a couple of other posts have mentioned, the GPT model is a language prediction model: basically, it is predicting what set of linguistic "ideas" (not really words per se) should follow the input text. It doesn't have the ability to really look anything up, and it will tend to get the general gist far more correct than the specific details, especially when the details require a non-language type of logic to complete.

The simplest example of this is with math problems. If you give ChatGPT (GPT3, really) a simple math problem, like basic addition, it will tend to get it correct, even if the specific question is not in the training set. (This alone was actually surprising even to its makers, as the model is not programmed to do math per se, but merely "predicts" the language response to one question from similar cases in the training set.) But as you make the math problem a little more complex, e.g. ask it for a power of some number (as an example, I asked it for 0.936^15), it can come up with pretty incorrect answers (in this case 0.5715 vs. the correct 0.37...). If you ask it a question whose answer requires a calculation like this (e.g. "If I throw a fair die 100 times, what is the probability that no sixes will occur?"), it will give an answer which is linguistically accurate ("In order to calculate this you need... which is (5/6)^100") but numerically wrong (in this case 0.000028 vs. the correct 0.000000012..., orders of magnitude out!).

The point of all this is that obviously very simple computer programs can do the calculations correctly. ChatGPT is not trying to compute the answer to the math problem at all; it is trying to predict what linguistic idea comes next based on the previous ones. It actually has no idea that it is even doing a calculation! If it did, it could easily appropriate some of its vast computing power to simply do the calculation. But it doesn't. The fact that it can do so much that isn't obviously related to simple language is an emergent property of the overall model (and it really is surprising how good it is at things no one really thought it would handle), but we shouldn't be the least bit surprised when it does what it did in RNS's case.
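To underline the point that this is not a failure of computing power: the two calculations mentioned above take only a couple of lines of Python, using nothing but ordinary floating-point arithmetic (the reported ChatGPT answers are taken from the comment above).

```python
# Check the two calculations discussed above with ordinary
# floating-point arithmetic -- the kind of computation any trivial
# program gets right, but a pure next-word predictor never performs.

p1 = 0.936 ** 15          # ChatGPT reportedly answered 0.5715
print(round(p1, 4))       # correct value is about 0.3708

# Probability of no sixes in 100 rolls of a fair die: (5/6)^100
p2 = (5 / 6) ** 100       # ChatGPT reportedly answered 0.000028
print(f"{p2:.2e}")        # correct value is about 1.21e-08
```

Both of the model's answers are off not by rounding error but by a wide margin, which is exactly what one would expect from a system that is pattern-matching the shape of an answer rather than computing one.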

It will be interesting to see what happens as they come out with future models (I believe GPT4 is due in 2023 or 2024), but as GPT3 is already very computationally expensive, and the dataset it has been trained on is already so big that no dataset several orders of magnitude bigger exists yet, my understanding is that they will have to spend far more effort making it work "smarter" rather than simply "bigger", as they did up until now (e.g. GPT2 to GPT3).

Jan 8, 2023·edited Jan 8, 2023

Someone explained to me that the AI is mainly trained to predict what the next word is most probably going to be, a bit like the suggestions on smartphone keyboards (but a lot better). So it's going to make up answers to any question for which it needs to 'work', i.e. anything it can't answer by effectively copy-pasting a Wikipedia or news article.

See this:

https://nautil.us/welcome-to-the-next-level-of-bullshit-237959/
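The keyboard-suggestion analogy can be made concrete with a toy model. Below is a minimal sketch using a hypothetical ten-word corpus: it counts which word most often follows each word and always suggests that one. Real GPT models are vastly more sophisticated (neural networks over sub-word tokens, attending to the whole context), but the core task — predict the next token — is the same.

```python
from collections import Counter, defaultdict

# Tiny example corpus (an assumption for illustration only).
corpus = "the cat sat on the mat and the cat ran".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    # Return the most frequent successor of `word` in the corpus,
    # like a keyboard suggestion bar that always picks the top guess.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Note that such a model can only ever recombine patterns it has seen; asked about anything outside its corpus, it still produces *some* fluent-looking continuation, which is the commenter's point about made-up answers.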


This comes as a surprise? You know Wikipedia is written by armies of left-wing partisans, who rely on other left-wing-created "sources" to create a self-referential universe. You know Facebook et al. made up "trending" tabs (which they initially claimed were based on "algorithms") to support their agenda. And you know this new bot, by its creators' admission, automatically removes any facts or information inconveniently deemed "racist" or "sexist". So what exactly do you find surprising here?


When I first discovered ChatGPT last month, I spent an hour asking it halakhic questions from different perspectives. See here a set of 3 tweets containing 12 answers to the question of whether one may use an umbrella on Shabbat, many of which I found quite compelling:

https://twitter.com/IsraelTechLaw/status/1598676457307398145?s=20&t=XOQJrqPu49hoklREkau5Gw
