Large language models based on transformers, such as ChatGPT and GPT-3, are all the rage. However, many people misunderstand what they do. These models can generate correct-looking text, and related models can produce images, videos, and other kinds of media. Basically, an LLM is Gmail autocomplete on steroids (a very high dose, though!). What that means is that given some hints, ChatGPT or GPT-3 can generate well-formatted, long-form text which, thanks to a very large training dataset, looks and even reads like human-written text. This doesn't mean the model really understands what it is spitting out. Let me explain.
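To make the "autocomplete on steroids" idea concrete, here is a minimal sketch in Python. It is not how an LLM actually works internally; it is a toy bigram model that does the same kind of thing at a vastly smaller scale: given the words so far, it picks a plausible next word, over and over. The corpus and seed word are made up for the demo.

```python
import random
from collections import defaultdict

# A tiny made-up "training corpus" for the toy autocomplete.
corpus = (
    "the model reads text and the model writes text and "
    "the model predicts the next word"
).split()

# Count which word follows which (a bigram table). A real LLM learns
# a far richer version of this over billions of tokens.
next_words = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev].append(nxt)

def autocomplete(seed, length=8, rng=random.Random(0)):
    """Extend `seed` one word at a time by sampling a likely next word."""
    out = [seed]
    for _ in range(length):
        candidates = next_words.get(out[-1])
        if not candidates:  # no known continuation: stop
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(autocomplete("the"))
```

The output reads like plausible text from the corpus, yet the program has no idea what any of the words mean, which is the point being made here about LLMs.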
We have seen variants of this, such as DALL·E 2 and Stable Diffusion, that can generate images and videos based on a given text input. These images do not carry much meaning, but they are very creative in nature. Most of the time they are not even "correct", but the point of these models is not to be correct. They are "generative models": they generate new things based on a given hint or nudge. Think of these models as the creative side of the brain: highly imaginative, but possibly without much logic. The same is the case with the text output. However, I will admit that it does feel very, very real at first sight.
Now let's come to what these models can do well: tasks that involve reading text, or watching videos or looking at images, and then rewriting the content in a different form can 'possibly' be handled by these models. They can write essays, do deep search over very large corpora of text, write SEO-friendly articles, or generate a whole website, all nearly as good as human-generated ones. Therefore, jobs that depend on such skills are at risk of being disrupted by these LLMs.
In the future, such models could be made to consume tabular data and generate decent presentations and graphs...maybe even videos. They could explain a given picture, or even be made to watch a video and then write a summary of it.
However, they are not really good at understanding the meaning of a given text, nor are they good at logical reasoning. They probably cannot extrapolate well, yet. Nor can they think of edge cases, fraud angles, and the like. They definitely cannot make decisions based on cost-benefit analysis. Basically, any task that requires a mixture of creativity and logic, such as decision making, empathy in a business environment, or thinking about user experience, remains beyond them.
Therefore, I am pretty certain that they won't take away jobs that require more than just consuming knowledge and spitting it out in a different form. At least, not yet!
Tuesday, January 17, 2023
Large language models and job disruptions!