Paper2Video: Automatic Video Generation from Scientific Papers (arxiv.org)
hirenj 1 hour ago [-]
This is great - now I can get the authentic conference experience of a disengaged speaker reading out the slides in a monotone, without all the hassle of international travel and scheduling.

In all seriousness, there could be more utility in this if it helped explain the figures. I jumped ahead to one of the figures in the example video, and no real attention was given to it. In my experience, this is really where presentations live and die: in the clear presentation of data points, with enough detail to bring people along.

netsharc 21 minutes ago [-]
There's a porn site (is it even porn if it's just nudity?) whose niche is women reading the news while taking off their clothes.

For papers, it doesn't have to go that far, but I imagine a polished AI girl (or guy) reading the summary would be more engaging.

Hah, "SteveGPT, present your PowerPoints like Steve Jobs did!"

fsh 3 hours ago [-]
The samples from the authors' GitHub are just some text vomited onto slides, and the AI voice reading them point by point. Exactly the opposite of a good presentation.
mattjenner 2 hours ago [-]
This will likely develop faster than the typical researcher's presentation skills. It could also increase access more generally: science communication is a skill, and an interested reader's ability to get to a conference (or watch the recordings) is limited. If this expands access to science, I'm for it.

(and I generally think AI-produced content is slop).

davidsainez 1 hour ago [-]
IMO this seems like exactly the use case where AI fails consistently: engaging storytelling and finding the simplest solution to a problem. For example, LLMs are really good at generating walls of code that will run, but they don't have good taste in architecting a solution. When I use them for coding, I spend time thinking of a good high-level approach and then use LLMs to fill in the more boilerplate-style code.
ninesnines 3 hours ago [-]
Ah, I guess if you’re very bad at presentations, this could be beneficial. However, scientific presentations are meant to communicate science and make things stick with your audience (whether it’s scientists or children you’re presenting to). This does not fix that problem at all. For anyone thinking of using this: please watch https://m.youtube.com/watch?v=Unzc731iCUY and maybe a talk by Jane Goodall on how to present your science engagingly. I would hate to see a lot of conference presentations made with this generator.

Another thing that improved my own presentations was noting down why I liked a talk or why I didn’t - what specific things the speaker did to make it engaging. Just paying attention to that helped enormously.

sebastiennight 3 hours ago [-]
Very interesting project, and I found two things particularly smart and well executed in the demo:

1. Using a "painter commenter" feedback loop to make sure the slides are correctly laid out with no overflowing or overlapping elements.

2. Having the audio/subtitles not read word-for-word the detailed contents that are added to the slides, but instead rewording that content to flow more naturally and be closer to how a human presenter would cover the slide.
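
To make the idea concrete, here is a minimal sketch (my own, in Python, not taken from the repo) of how such a painter-commenter loop might be wired up; `paint` and `comment` are hypothetical stand-ins for the actual model calls:

  MAX_ROUNDS = 3

  def paint(spec, feedback=None):
      # Stand-in for the "painter": renders a slide from the spec,
      # optionally revising it based on the critic's feedback.
      return {"spec": spec, "feedback": feedback}

  def comment(slide):
      # Stand-in for the "commenter": inspects the rendered slide and
      # returns a complaint such as "bullet 3 overflows its text box",
      # or None if the layout looks clean.
      return None

  def painter_commenter(spec):
      slide, feedback = None, None
      for _ in range(MAX_ROUNDS):
          slide = paint(spec, feedback)
          feedback = comment(slide)
          if feedback is None:
              return slide  # commenter approved the layout
      return slide  # keep the last attempt after MAX_ROUNDS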

A couple of things might be improved in the prompts for the reasoning features, e.g. in `answer_question_from_image.yaml`:

  1. Study the poster image along with the "questions" provided.
  2. For each question:
     • Decide if the poster clearly supports one of the four options (A, B, C, or D). If so, pick that answer.
     • Otherwise, if the poster does not have adequate information, use "NA" for the answer.
  3. Provide a brief reference indicating where in the poster you found the answer. If no reference is available (i.e., your answer is "NA"), use "NA" for the reference too.
  4. Format your output strictly as a JSON object with this pattern:
     {
       "Question 1": {
         "answer": "X",
         "reference": "some reference or 'NA'"
       },
       "Question 2": {
         "answer": "X",
         "reference": "some reference or 'NA'"
       },
       ...
     }

I'd assume you would get better results by asking for the reference first and then the answer. Otherwise, you probably have quite a number of answers where the model just "knows" the answer and takes it from its own training rather than from the image, which would bias the benchmark.
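To make that concrete, the output pattern in step 4 could simply swap the field order, so the model has to commit to the evidence before the letter (my sketch, not the repo's actual prompt):

  {
    "Question 1": {
      "reference": "some reference or 'NA'",
      "answer": "X"
    },
    ...
  }

Since the model generates tokens in order, emitting "reference" first forces it to locate the supporting region of the poster before it picks an answer, rather than rationalizing a letter it had already chosen.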
tobwen 44 minutes ago [-]
Hrhr, I'd love to have automatic CODE generation from Scientific Papers :D
progbits 33 minutes ago [-]
Damn, they automated Károly Zsolnai-Fehér
ks2048 8 hours ago [-]
Project page (links to both github and arxiv): https://showlab.github.io/Paper2Video/
anothernewdude 4 hours ago [-]
This is the opposite of what I want. I'd rather turn videos into articles.
Lerc 3 hours ago [-]
People are different; I would prefer paper to video, but this implementation is not yet sufficient for what I would use. But as Doctorcarolorangyfaheer says, maybe a few more papers down the line...