
Boosting Your Productivity With AI: Using AI for Summarization


You can think of summarization in terms of two different strategies:


  • Abstractive summarization: Attempts to summarize a text in one’s own words. This involves abstracting the key points and articulating them in new sentences. 

Pros

  • Results in more concise summaries.

  • Can rephrase, interpret, and condense complex ideas for clearer phrasing.

  • Ideal for creating summaries where a change in style (e.g., from academic to conversational) is beneficial.

Cons

  • More challenging to develop due to the need for advanced natural language understanding and generation capabilities.

  • Higher risk of inaccuracies or distortions in representing the original text's meaning.



  • Extractive summarization: Condenses a text by extracting the most important strings of words. This is like using a highlighter on an article and then only reading the highlighted parts. No new words or phrases are used, but judgments are made about which existing words are most important.

Pros

  • Directly uses sentences from the original text, ensuring factual accuracy and relevance.

  • Easier to implement (for humans).

  • Great for the research period before writing an essay, since it collects relevant quotes.

  • Suitable for technical or factual content where precision is crucial.

Cons

  • Summaries can lack cohesion and may appear disjointed, since sentences won’t flow into each other.

  • May include redundant information or fail to capture the essence of the text if key sentences are overly detailed.

  • Less flexibility in adjusting the summary's tone or style to suit different audiences.
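To make the "highlighter" idea concrete, extractive summarization can be sketched in a few lines of code. The sketch below uses a classic pre-LLM approach (a simplified word-frequency method in the spirit of Luhn's algorithm, not how LLMs work): score each sentence by how often its words appear in the whole text, then return the top-scoring sentences verbatim, in their original order.

```python
import re
from collections import Counter

def extractive_summary(text, num_sentences=2):
    """Pick the highest-scoring sentences verbatim (no new words)."""
    # Naive sentence split on ., !, or ? followed by whitespace.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # Count how often each word appears across the whole text.
    freq = Counter(re.findall(r'\w+', text.lower()))
    # Score each sentence by the total frequency of its words.
    scores = {
        i: sum(freq[w] for w in re.findall(r'\w+', s.lower()))
        for i, s in enumerate(sentences)
    }
    # Keep the top-scoring sentences, then restore original order,
    # like reading only the highlighted parts of an article.
    top = sorted(sorted(scores, key=scores.get, reverse=True)[:num_sentences])
    return ' '.join(sentences[i] for i in top)
```

Note the defining property: every sentence in the output appears word-for-word in the input, which is exactly what makes the result easy to verify against the source.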


Our experience has been that LLMs fare reasonably well at abstractive summarization, but are surprisingly bad at extractive summarization. Here are some examples:



Abstractive summarization


Here is Claude offering a fairly good abstractive summary of Peter Singer’s famous paper “Famine, Affluence, and Morality”:



Of course, LLMs don’t understand text in the same way that humans do, and they make mistakes. That’s why, as we have mentioned above, it’s best to make sure either: (1) you can independently verify the information LLMs give you, or (2) the stakes are very low, so you won’t run into trouble if the information they give you is wrong.



Extractive summarization


In our experience, GPT-4 struggles to offer extractive summaries at all. Here is a representative exchange:



Alternatives such as Claude are more willing to attempt extractive summaries, but they are quite bad at it. If you have read the paper in question (you should, it’s very good!), you will recognize that the following is a very poor summary:



For this reason, we advise extreme caution when attempting to use LLMs for extractive summarization. To get good results, you will likely have to invest real effort in prompt engineering to find the right prompt for whatever it is you want summarized.
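Whatever prompt you settle on, there is one safeguard that works mechanically: a genuine extractive summary must consist of verbatim passages, so you can check the model's output against the source yourself. Here is a minimal sketch of that check (the function name and example quotes are our own, for illustration):

```python
def verify_extractive(source, extracted_passages):
    """Return the passages that do NOT appear verbatim in the source text."""
    # Normalize whitespace so line-wrapping differences don't cause
    # false alarms when comparing passages against the source.
    normalized_source = ' '.join(source.split())
    return [p for p in extracted_passages
            if ' '.join(p.split()) not in normalized_source]
```

If this returns a non-empty list, the model has paraphrased rather than extracted, and you should not treat its "quotes" as quotes.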





✉️ Send us your prompts! ✉️

If you have suggestions for additional uses of LLMs, we'd love you to send them to us (including an example prompt) at info@clearerthinking.org. If we like them, we may add them to our library and credit you where they appear.


