This is very cool. I built one of these myself around Christmas; Claude Code can put one together in just a couple prompts (this is also how I worked out how to have Claude test TUIs with tmux). What was striking about my finished product --- which is much less slick than this --- was how much of the heavy lifting was just working out which arguments to pass to ffmpeg.
It's surprisingly handy to have something like this hanging around; I just use mine to fix up screen caps.
Commenting mostly because when I did this I thought I was doing something very silly, and I'm glad I'm not completely crazy.
You can use AI to figure out the arguments to ffmpeg. But indeed, it seems like a single call to the ffmpeg CLI powers the whole thing, which is amazing.
ffmpegCmd := exec.Command("ffmpeg",
"-ss", fmt.Sprintf("%.3f", position.Seconds()),
"-i", p.path,
"-vf", strings.Join(filters, ","),
"-vframes", "1",
"-f", "image2pipe",
"-vcodec", "bmp",
"-loglevel", "error",
"-",
)

Invoking ffmpeg, gzip and tar commands is a sort of reverse Turing test for LLMs.
To access this website, you must produce a valid tar command without alt-tabbing. You have ten seconds to comply.
> you must produce a valid tar command
Define "valid"? If you mean "doesn't give an exit error", `tar --help`[0] and `tar --usage`[1] are valid.
[0] For both bsdtar (3.8.1) and GNU tar (1.35)
[1] Only for GNU tar (1.35)
Damn, you solved it!
I feel so bad that I need to google every single time I need to untar and unzip a file :(
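For anyone else in the same boat, a short cheat sheet covers nearly every case (archive names are placeholders):

```shell
# Extract a .tar.gz archive into the current directory:
tar -xzf archive.tar.gz
# Mnemonic: x = extract, z = gzip, f = file; add v to list files as they extract:
tar -xzvf archive.tar.gz
# Modern GNU tar and bsdtar auto-detect the compression, so -xf alone works
# for .tar, .tar.gz, .tar.xz, and .tar.bz2 alike:
tar -xf archive.tar.xz
# Plain zip files use a different tool entirely:
unzip archive.zip
```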
On macOS I just press space and trim with Finder. It even avoids re-compressing.
I think this is the first instance I've seen of an actual terminal video player. Very fun to play with.
mplayer, mpv and I think VLC can do it, with the right output driver settings (libcaca or a few other choices.)
You can just use ffmpeg to extract frames, and then render the raw images with unicode blocks.
(There's Kitty Graphics too, but I couldn't figure out how to make terminal UI layout work with it.)
I'd use ffmpeg to downscale the frames to the terminal size too. There are also various filters that could help quantizing the colors to what your terminal supports. The paletteuse filter will get you free dithering too.
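A sketch of that pipeline, assuming illustrative sizes and an 80-column terminal; palettegen/paletteuse is the standard two-branch filtergraph, and the dither option is where the free dithering comes from:

```shell
# Downscale each frame to roughly terminal resolution, build a 16-color
# palette from the whole stream, then map frames onto it with dithering.
# One PNG per frame is written to frames/ for a renderer to pick up.
ffmpeg -i input.mp4 \
  -filter_complex "fps=10,scale=80:-1:flags=lanczos,split[a][b];[a]palettegen=max_colors=16[p];[b][p]paletteuse=dither=sierra2_4a" \
  frames/%04d.png
```

`max_colors` and the dither algorithm would need tuning to the target terminal's palette.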
yeah I remember learning this trick in like 2007 with libaa and later caca for color.
It looks like this app is shelling out to ffmpeg to get the bitmap of a frame and then shelling out to something called chafa to convert it to nice terminal-friendly video.
I have been using this one[0] and it is small, fast, and seems to work pretty great for me so far.
Happy to hear! Some of my thoughts when building it:
- I haven't implemented audio support yet, but it would be nice
- I like --dry-run
- I didn't use a TUI widget library, but now it's at the point where it's tedious to refactor the UI / make it prettier
- I like OP's timeline widget
- Wanted to focus on static binaries. I got chafa static linking working for Linux, but haven't bundled ffmpeg yet
- which reminds me of licenses -- chafa and ffmpeg are LGPL iirc
- a couple other notes from early on: https://wonger.dev/posts/chafa-ffmpeg-progress
It's interesting how terminal apps are increasing in popularity after decades of desktop and web apps. I wonder if it's talking to chat AIs that's making people more comfortable with a prompt screen, or if it's the simplicity and lack of bloat.
being true cross-platform is a bigger draw for me. once something works on one platform it will usually work on any other platform that has a terminal.
im in the process of switching to neovim as my main editor just so i can have the same setup everywhere. IDEs like vscode are 'cross platform' but only work on desktop, and there are IDE-like editors for android but none of them work on desktop. oddly enough neovim on android/termux is actually easier to use than any of the IDE editor apps mainly due to everything being keyboard based
when it comes to writing my own mini programs/scripts, it's basically the promise of things like flutter where you can write something once and run it everywhere, only it takes hours instead of days to throw something together and it's not as overkill because im just using python or bash and then fzf or textual for any interactive parts
I asked about this tool 3 days ago, HN is a magical place! https://news.ycombinator.com/item?id=47363432
If you don't like leaving your main video player, IINA on Mac is scriptable, so I just use shortcut keys to send start/end indicators to a script which runs ffmpeg on the timestamps.
I'm sure other video players like VLC support this, but I found VLC's APIs very lacking.
mpv has plugins for this like https://github.com/serenae-fansubs/mpv-webm
I don't find trimming videos with ffmpeg particularly difficult; it's basically just `-ss xx -to xx -c copy`. Sure, you need to get those timestamps using a media player, but you probably already have one, so that isn't really an issue.
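Spelled out, with illustrative timestamps (note that `-t` takes a duration while `-to` takes an end time, and that `-to`'s meaning shifts depending on whether it comes before or after `-i`):

```shell
# Stream-copy a clip starting at 1:30, lasting 75 seconds. No re-encoding,
# so it's nearly instant, but with -c copy the cut snaps to the keyframe
# at or before the seek point, so the start can be slightly early.
ffmpeg -ss 00:01:30 -i input.mp4 -t 75 -c copy clip.mp4
```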
What I've found to be trickier is dividing a video into multiple clips, where one clip can start at the end of another, but not necessarily.
I don't find sharing files with people very difficult, just log in to your FTP and give an account to another user. - Person commenting on OneDrive
Missed opportunity to reference the famous Dropbox hn comment.
I just think there are other closely related use cases where a separate program can add more value, especially in the terminal. I wouldn't suggest most people should use ffmpeg instead of a gui, those are too dissimilar. Another example is cutting out a part of a video, with ffmpeg you need to make two temporary videos and then concatenate them, that process would greatly benefit from a better ux.
Point of order: the Dropbox HN comment is famously misconstrued. People think it was about Dropbox; it was about the Dropbox YC application, and was both well-intentioned and constructive.
> with ffmpeg you need to make two temporary videos and then concatenate them
It can be done in a single command, no temp files needed.
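One way to do it in a single command is the select filter; this is a sketch with illustrative cut points, and it does re-encode (the setpts/asetpts calls regenerate contiguous timestamps across the gap):

```shell
# Drop the segment between t=10s and t=20s in one pass, keeping everything
# before and after it, with no temp files:
ffmpeg -i input.mp4 \
  -vf "select='not(between(t,10,20))',setpts=N/FRAME_RATE/TB" \
  -af "aselect='not(between(t,10,20))',asetpts=N/SR/TB" \
  output.mp4
```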
There's nothing easy about it. Here's a taste.
# make a 6 second long video that alternates from green to red every second.
ffmpeg -f lavfi -i "color=red[a];color=green[b];[a][b]overlay='mod(floor(t)\,2)*w'" -t 6 master.mp4; # creates 150 frames @ 25fps.
# try make a 1 second clip starting at 0sec. it should be all green.
ffmpeg -ss 0 -i "master.mp4" -t 1 -c copy "clip1.mp4"; # exports 27 frames. you see some red.
ffmpeg -ss 0 -t 1 -i "master.mp4" -c copy "clip2.mp4"; # exports 27 frames. you see some red.
ffmpeg -ss 0 -to 1 -i "master.mp4" -c copy "clip3.mp4"; # exports 27 frames. you see some red.
# -t and -to stop after the limit, so subtract a frame. but that leaves 26...
# so perhaps offset the start time so that frame#0 is at 0.04 (ie, list starts at 1)?
ffmpeg -itsoffset 0.04 -ss 0 -i "master.mp4" -t 0.96 -c copy "clip4.mp4"; # exports 25 frames, all green, time = 1.00. success.
# try make another 1 second clip starting at 2sec. it should be all green.
ffmpeg -itsoffset 0.04 -ss 2 -i "master.mp4" -t 0.96 -c copy "clip5.mp4"; # exports 75 frames, time = 1.08, and you see red-green-red.
# maybe don't offset the start, and drop 2 at the end?
ffmpeg -ss 2 -i "master.mp4" -t 0.92 -c copy "clip6.mp4"; # exports 75 frames, time = 1.08, and you see green-red.
ffmpeg -ss 2 -t 0.92 -i "master.mp4" -c copy "clip7.mp4"; # exports 75 frames, time = 0.92, and you see green-red.
# try something different...
ffmpeg -ss 2 -i "master.mp4" -c copy -frames 25 "clip8.mp4"; # video is broken.
ffmpeg -ss 2 -i "master.mp4" -c copy -frames 25 -avoid_negative_ts make_zero "clip9.mp4"; # exports 25 frames, all green, time = 1.00. success?
# try export a red video the same way.
ffmpeg -ss 3 -i "master.mp4" -c copy -frames 25 -avoid_negative_ts make_zero "clip10.mp4"; # oh no, it's all green!

I've never tried doing frame-perfect clips like that; that does sound annoying. But from a cursory read of the source, I don't think this program will solve that issue either, because the timestamps in your examples are all correct, and the TUI is using ffmpeg with -ss and -t as well.
func BuildFFmpegCommand(opts ExportOptions) string {
output := opts.Output
if output == "" {
output = generateOutputName(opts.Input)
}
duration := opts.OutPoint - opts.InPoint
args := []string{"ffmpeg", "-y",
"-ss", fmt.Sprintf("%.3f", opts.InPoint.Seconds()),
"-i", filepath.Base(opts.Input),
"-t", fmt.Sprintf("%.3f", duration.Seconds()),
}
I think the best way of getting frame-accurate clips like that is putting the starting time after the input (or rather, before the output), which decodes the video up to that time and re-encodes it instead of copying. Both of these commands give the expected output:

ffmpeg -i master.mp4 -ss 0 -t 1 -c:v libx264 green.mp4
ffmpeg -i master.mp4 -ss 1 -t 1 -c:v libx264 red.mp4

Yer, I noticed that this tool was just doing `-ss -i -t` from its demo gif, which is what prompted me to reply. I'm sure people will discover that all sorts of problems manifest if they don't start a lossless clip on a keyframe. One such scenario is when you make a clip that plays perfectly on your PC, but then you send it to someone over FB Messenger, and all of a sudden there's a few seconds of extra video at the start!
Can't make frame perfect cuts without re-encoding, unless your cut points just so happen to be keyframe aligned.
There are incantations that can dump for you metadata about the individual packets a given video stream is made up of, ordered by timecode. That way you can sanity check things.
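One such incantation, for the packet-level view (the `K` in the flags column marks keyframes, i.e. the only timestamps where a `-c copy` cut is clean):

```shell
# Dump each video packet's timestamp, duration, and flags, in stream order:
ffprobe -v error -select_streams v:0 \
  -show_entries packet=pts_time,duration_time,flags \
  -of csv=print_section=0 input.mp4
```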
This is terribly frustrating. The paths of least resistance either lead to improper cuts or wasteful re-encoding. Re-encoding just until the nearest keyframe I'm sure is also possible, but yeah, this does suck, and the tool above doesn't seem to make this any more accessible either according to the sibling comment.
> Re-encoding just until the nearest keyframe I'm sure is also possible

Yer, I've done that, and it's a pain to do "manually" (ie, without having a script ready to do it for you). I've also manually sliced the bitstream to re-insert the keyframe, which if applied to my clip5.mp4 example, could potentially reduce the 50* negative ts frames to maybe 2 or 3. It would be easier if there were tools that could "unpack" and "repack" the frames within the bitstream, and allow you to modify "pointers"/etc in the process - but I don't know of any such thing.
For frame perfect cuts you need to re-encode. You can use lossless H264 encoding for intermediary cuts before the final one so that you don't unnecessarily degrade quality.
I wonder if there is a solution which would just copy the pieces in between the starting and ending points while only re-encoding the first and last piece as required.
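That approach exists (it's roughly what "smart cut" features in tools like LosslessCut attempt). A hand-rolled sketch, where START is the desired cut point and KF is the first keyframe at or after it (both hypothetical values you'd read off ffprobe output), and which assumes the re-encoded head ends up codec-compatible with the copied tail (in practice you must match profile, resolution, and pixel format or players glitch at the splice):

```shell
START=0.5   # where you actually want the cut (hypothetical)
KF=1.0      # first keyframe >= START, found via ffprobe (hypothetical)
# Re-encode only the short head between the cut point and the keyframe:
ffmpeg -i input.mp4 -ss "$START" -to "$KF" -c:v libx264 head.mp4
# Stream-copy everything from the keyframe onward, losslessly:
ffmpeg -ss "$KF" -i input.mp4 -c copy tail.mp4
# Splice the two with the concat demuxer:
printf "file 'head.mp4'\nfile 'tail.mp4'\n" > parts.txt
ffmpeg -f concat -i parts.txt -c copy output.mp4
```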
I've been trying to cut precise clips from a long mp4 video over the past week or so and learned a lot. I started with ffmpeg on the command line but between getting accurate timestamps and keyframe/encoding issues it is not trivial. For my needs I want a very precise starting frame and best results came from first reencoding at much higher quality, then marking & batching with LossLessCut, then down coding my clips to desired quality. Even then there's still some manual review and touch-up. It's not crazy-hard, but by no means trivial or simple.
FWIW, here's a simple command line utility for joining and trimming the multiple video files produced by a video camera.
I used a plugin in mpv to do it but I can't find it anymore. You just pressed a key to mark the start and end. And with . and , you could do it at keyframe resolution not just seconds.
Found a few links to projects that fit this description in an awesome-mpv repo.
https://github.com/stax76/awesome-mpv?tab=readme-ov-file#vid...
Appreciate you mentioning the MPV route for making clips, I might actually go through and process all the game recordings I saved for clips over the years.
There's mpv-webm, which is great, but has no way to make a lossless clip AFAIK.
Love it! I had this idea before but never took the time to implement it. You did it, thank you.
Doesn't -y mean overwrite file? And isn't there a difference between -ss before -i and after -i?
Yes and yes. I assume no further info is necessary for what I also assumed was asked rhetorically.
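For the record, the difference looks like this (timestamps illustrative):

```shell
# -y: overwrite the output file without asking.
# -ss before -i seeks in the input; with -c copy the cut lands on a keyframe,
# so it's nearly instant but not frame-exact:
ffmpeg -y -ss 30 -i input.mp4 -t 5 -c copy fast_cut.mp4
# -ss after -i decodes from the beginning and discards frames up to 30s:
# slow, but frame-accurate when combined with re-encoding:
ffmpeg -y -i input.mp4 -ss 30 -t 5 -c:v libx264 exact_cut.mp4
```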
Neat! I did the Emacs equivalent https://github.com/xenodium/video-trimmer
Simple CLI tools like this are underrated. The moment you can pipe it into other commands, it becomes much more useful in automation workflows.
Having to separately download ffmpeg in the Windows distribution does not really make sense
Just bundle it
People that use GUIs/tools for things like ffmpeg, rclone etc really want the developer to autodetect if they have it already, and use that instead of installing a separate version/binary.
How do I know? I built one (https://github.com/rclone-ui/rclone-ui)
I disagree, I don't want another ffmpeg binary, I already have one. Winget works well, especially since this is already a terminal program.
afaik winget can automatically manage package dependencies.
What's weird is that I have problems getting the ffmpeg switches right, even with LLM assistance.
I think I understand the switches, and am then demonstrably shown that I have no clue.
These days, I'm basically relegated to following pre-LLM blogs and SO, hoping I find the right combination.
I think it is part of a more general problem. I don't think anybody intends to make a terrible DSL; it is just a natural progression:
1. We have a command line program.
2. Command line args are traditionally parsed by getopt (or a close relative), so we use that (it's expected).
3. Our command line program has grown tremendously in complexity, and our args are now effectively a domain-specific language.
4. Congratulations, we are now shipping a language using a woefully inadequate parsing engine with some of the worst syntax in existence.

See also: iptables, find.

I think it would behoove many of these programs to take a good look at what they are doing when they reach step 3 and invest in a real syntax and parser. It is fine to keep a command line interface, but you don't have to use getopt.
I've been using ffmpeg with Claude as a video editor for a long time.
You mean you let Claude create the command, or does it run ffmpeg itself on your local machine and return the finished cut?
I just let claude use ffmpeg to edit videos.
Could have really used this a couple days ago. I had to record a video for an assignment, but due to the lack of global hotkeys in OBS on Wayland, I had to start and stop the recording from the OBS GUI. I tried to figure out ffmpeg, but I was too tired and it was getting close to the deadline, so I spent some time learning how to do it with Kdenlive.
I guess I can find another implementation to combine trimmed parts after taking out certain scenes?
Write a text file with all the parts like this:
file 'file1.mp4'
file 'file2.mp4'
file 'file3.mp4'
Then call ffmpeg like this: ffmpeg -f concat -i files.txt -c copy output.mp4
And I guess you could make an LLM write a {G,T}UI for this if you really want.

Thanks! I don't want to just stitch them. I'm hoping for a smooth transition and an easy blend, with no jerking between scenes.
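The concat demuxer can't blend; for a dissolve you need the xfade (and acrossfade, for audio) filters, which means re-encoding. A sketch for two clips, assuming the first is 5 seconds long and both have matching resolution and frame rate (xfade requires that):

```shell
# 1-second crossfade; offset is where the transition starts in the first
# clip, i.e. first clip's duration minus the fade duration (5 - 1 = 4):
ffmpeg -i file1.mp4 -i file2.mp4 \
  -filter_complex "[0:v][1:v]xfade=transition=fade:duration=1:offset=4[v];[0:a][1:a]acrossfade=d=1[a]" \
  -map "[v]" -map "[a]" output.mp4
```

Chaining more than two clips means nesting further xfade stages with recomputed offsets, which is exactly the kind of bookkeeping a small script or TUI could hide.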