We’ve gotten $595 on Kickstarter!!!
Keep spreading the word; if we get to $2,500 then we’ll be able to get the lights and generator we need for our upcoming shoot.
So here is more about the software I talk about in the above video, which I hope a developer can start a Kickstarter page to create. My hope is that my ideas can be credited in the software’s “about” notes, that the software could be made free for all to download, and that the source be provided for others to use in more free software.
So here are my personal needs from the software and how I see it being designed:
I need software that maintains the time stamps from AVCHD clips by putting them into the file names when I convert them to ProRes for editing in Final Cut Pro.
So here is what the workflow would be. Put in the SD card that has AVCHD footage on it. It would prompt me with the question, “Who filmed it?” I’d type in my name. Then it would ask for a camera nickname and I’d put in “GH1”. Then it would ask what project I was shooting and I’d put either a job or a film, or I’d just type in “LIFE-LOG” if it was general footage. Then it would ask for a city and state and I’d enter that. All of these fields would be optional, and the user could make more fields if they wanted. These fields later determine the file names of the transcoded clips.
Then it would ask if I wanted to offset the time base at all, and in most cases I’d say no. But if I discovered my cameras’ clocks were an hour off from each other after traveling from another time zone, this is where I’d fix that.
Then it would ask the user to designate one, two, or three locations where you want the card’s contents to be backed up. It would save the info from the previous ingest, so in most cases you could just glide through that step. Then it would ask what resolution you want your “proxy” and your “master clips” to be. The proxy will usually be 640 by 360 in ProRes, and the master would be 1920 by 1080 in ProRes HQ. And finally, an option for a 120 by 90 “thumbnail” could be checked off. After that page you could hit “next”.
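To make the wizard concrete, here is a minimal sketch of how its answers might be collected into one settings object. All of the names and defaults below are my own assumptions, but the fields themselves are the ones described above, and every one is optional:

```python
from dataclasses import dataclass, field

@dataclass
class IngestConfig:
    """Answers gathered by the ingest wizard; every field is optional."""
    shooter: str = ""            # e.g. "ARIN_CRUMLEY"
    camera_nickname: str = ""    # e.g. "GH1"
    project: str = ""            # a job, a film, or just "LIFE-LOG"
    city_state: str = ""         # e.g. "BROOKLYN_NY"
    clock_offset_hours: int = 0  # fix cameras whose clocks drifted a time zone
    backup_destinations: list = field(default_factory=list)  # one to three drive roots
    proxy_size: tuple = (640, 360)     # ProRes proxy
    master_size: tuple = (1920, 1080)  # ProRes HQ master
    make_thumbnails: bool = False      # optional 120x90 thumbnails

# A filled-in session, reusing the example answers from the text above:
config = IngestConfig(
    shooter="ARIN_CRUMLEY",
    camera_nickname="GH1",
    project="LIFE-LOG",
    backup_destinations=["/Volumes/LOVE 006a", "/Volumes/LOVE 006b"],
)
```

Because the defaults are remembered between ingests, most sessions would only need the fields that actually changed.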
Then it would ask you what frame rates you wanted. The default would be that everything shot at 30p becomes 23.98, everything at 24p becomes 23.98, and everything at 60p becomes 23.98. In the case of footage from the GH1, it’s important that it recognize the 24p material that camera shoots even though it’s in a 60i wrapper. Those clips would of course end up at 23.98, but not by being slowed down like the 60p material; instead, by doing the magic other applications have managed, where the true 24p signal is pulled out of the 60i wrapper. This app would be built for the GH1 more so than the other AVCHD cameras out there, so this functionality is completely critical. All indie filmmakers using the GH1 will want to shoot in the 1080 24p mode, so they’ll really need true 23.98 files at 1920 by 1080 to be spit out.
Also, by being able to set rules on incoming and outgoing frame rates, filmmakers will be able to freely shoot 60p, 24p, and 30p depending on the speed at which they want their footage to play back. In my case I’d want all footage to have its speed changed to 23.98. That way I have two types of slow motion I can do: slight, by shooting 30p, or extreme, by shooting 60p. The other huge advantage here is that by simply setting my proxy size and my master size, all the footage I shoot will be scaled to a universal pixel dimension regardless of what resolution the camera recorded it in. This is great because all my 720 footage will be upscaled to the same resolution as my 1080 footage, so everything works on one timeline, and the offline-to-online jump happens more smoothly. In an ideal world, all up-resing of 720 (or even lower resolutions) to 1080 would take place in a souped-up, technologically advanced way, like the high-end up-res software and plugins that already exist.
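Those frame-rate rules could be expressed as a simple lookup from how the camera recorded to how the clip gets conformed. The sketch below maps each source format to an ffmpeg-style filter chain; this is only a rough assumption of mine, not a tested recipe (real pulldown removal for the GH1’s 24p-in-60i footage is fussier than two filters):

```python
# Map what the camera recorded to how it should be conformed to 23.98.
TARGET_FPS = 23.976  # the "23.98" that Final Cut Pro displays

def conform_filter(source: str) -> str:
    """Return a sketch of an ffmpeg -vf chain for a given source format."""
    if source == "24p_in_60i":
        # GH1 1080 mode: pull the true 24p signal out of the 60i wrapper
        # (inverse telecine) instead of slowing anything down.
        return "fieldmatch,decimate"
    if source == "60p":
        # extreme slow motion: retime 60 frames to play at 23.98
        return f"setpts=PTS*(60/{TARGET_FPS})"
    if source == "30p":
        # slight slow motion
        return f"setpts=PTS*(30/{TARGET_FPS})"
    if source == "24p":
        # already true 24p; just relabel the stream to 23.98
        return "null"
    raise ValueError(f"unknown source format: {source}")
```

The point of putting this in a rules table is the one made above: the shooter picks a recording mode purely for the playback speed they want, and the ingest app does the rest.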
Once all that is filled in, you’d hit “GO” and it would immediately do the following.
In the destinations you chose earlier, it would make a folder for the shooter’s name that would look like this:
Inside that folder it would make a subfolder called:
Inside that subfolder it would make another subfolder called:
Inside that subfolder it would create another subfolder named after the range of creation dates and time stamps of the AVCHD video files; that would look something like this:
Inside that folder would be the exact contents of the “private” folder on the SD card. Or, if you were running this application on a folder that had previously backed up the SD card, that would work too.
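The example folder names above didn’t survive into this post, so the hierarchy below is only a hypothetical reading of it: the shooter folder is described in the text, but the two middle levels (I’m guessing camera and project) and the exact date-range naming are my assumptions:

```python
from datetime import datetime
from pathlib import Path

def daterange_folder(timestamps):
    """Name a folder after the first and last clip creation times on the card."""
    first, last = min(timestamps), max(timestamps)
    return f"{first:%Y-%m-%d_%H%M%S}_to_{last:%Y-%m-%d_%H%M%S}"

def backup_path(dest, shooter, camera, project, timestamps):
    # Shooter at the top as described above; the camera and project levels
    # are guesses, since the original example folder names are missing.
    return Path(dest) / shooter / camera / project / daterange_folder(timestamps)

p = backup_path(
    "/Volumes/LOVE 006a", "ARIN_CRUMLEY", "GH1", "LIFE-LOG",
    [datetime(2009, 10, 1, 9, 30, 0), datetime(2009, 10, 1, 17, 45, 12)],
)
# e.g. .../ARIN_CRUMLEY/GH1/LIFE-LOG/2009-10-01_093000_to_2009-10-01_174512
```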
The files would back up to one destination at a time. The way I’d do this is I’d have two 500 GB 2.5-inch drives hooked up to a MacBook Pro, one named LOVE 006a and the other named LOVE 006b. When prompted earlier to choose two destinations to back up the cards, I would have chosen the root of each of those drives. That way I’d have a folder on each drive for every shooter, regardless of when they shoot and regardless of what camera or project they’re shooting. This might seem counterintuitive, but trust me, it works out to put all the footage everyone ever shoots in one place. So anyway, this process would make the two drives identical if I always used them this way, which I would. I’d store all my FCP and other project files on my internal drive and just use these as media drives. After backing up the AVCHD “private” folder contents, it would check the size and file count in all three locations and write an error to a log if something didn’t add up, letting you know that maybe an SD card had dismounted or something.
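That cross-check of size and file count between the card and each backup could be as simple as the sketch below (the function names and the log-as-a-list shape are my own; a real build would likely also compare checksums):

```python
import os

def folder_stats(root):
    """Total file count and byte count under a folder, recursively."""
    count = size = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            count += 1
            size += os.path.getsize(os.path.join(dirpath, name))
    return count, size

def verify_backups(source, destinations, log):
    """Cross-check the card against every backup; log a warning on mismatch."""
    expected = folder_stats(source)
    ok = True
    for dest in destinations:
        got = folder_stats(dest)
        if got != expected:
            log.append(f"MISMATCH at {dest}: expected {expected}, got {got} "
                       "(an SD card may have dismounted mid-copy)")
            ok = False
    return ok
```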
Once it was done backing up to both drives, it would start to transcode. At this point it would first eject your SD card and give a pop-up that said, “Your SD card has been backed up and ejected; you may now remove it.” Then it would ask if you wanted the transcoding to begin or if you wanted it added to the queue. This alert would be spoken in a computer voice so you knew it was ready from across the room. If you didn’t take any action within 30 seconds, it would start the transcoding. In the “ARIN_CRUMLEY” folder it would create a folder called “COMPRESSED”. Inside that folder it would create a folder called “VIDEO”, and inside that a folder called “640_by_360_23_dot_98_fps”. All of the 23.98 640 by 360 ProRes files would be placed in there as they were compressed. The ProRes files would be named like so:
Now I should mention that this is the application’s main purpose. It would look at the time stamp of when the AVCHD video file was created and use that date and time to create the file name you see above, building the rest of the file name from the other optional information the user entered. This is helpful because it prevents you from ever having two files with the same name: there is only one Arin with a GH1 at that zip code who hit record at that second, so it’s a completely unique file name. Unique file names are important because Final Cut Pro gets very confused when it tries to reconnect to two different files with the same name. They are also important for web sharing and further transcoding, where you want a trail of metadata saved in the file name itself, so that wherever the file goes, the time, shooter, project, and location go with it.
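The exact file-name layout shown above didn’t survive into this post, so the rendering below is only a hypothetical version of the idea: the creation timestamp (shifted by any clock offset entered during ingest) comes first, followed by whichever optional fields were filled in:

```python
from datetime import datetime, timedelta

def clip_filename(created: datetime, shooter="", camera="", project="",
                  location="", offset_hours=0, ext="mov"):
    """Build a globally unique clip name from the AVCHD file's creation
    timestamp plus the optional ingest fields. The exact layout here is an
    assumption; the point is that timestamp-first names never collide."""
    stamp = (created + timedelta(hours=offset_hours)).strftime("%Y-%m-%d_%H-%M-%S")
    parts = [stamp] + [p for p in (shooter, camera, project, location) if p]
    return "_".join(parts) + "." + ext

name = clip_filename(datetime(2009, 10, 1, 14, 3, 27),
                     shooter="ARIN_CRUMLEY", camera="GH1",
                     project="LIFE-LOG", location="BROOKLYN_NY")
# -> "2009-10-01_14-03-27_ARIN_CRUMLEY_GH1_LIFE-LOG_BROOKLYN_NY.mov"
```

Because every optional field is skipped when empty, a bare ingest still yields a valid, sortable, timestamp-only name.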
So as each video clip finishes its proxy transcode, it would be copied from drive A to drive B into the same destination folders, keeping those drives identical and keeping the computer multitasking. The bottleneck here will probably be the processor, not drive speed, so copying while also processing should be fine. Once that was done, it would cross-check the number of files in the AVCHD folders against the number of files that had been transcoded, cross-check the durations, give you an error if anything didn’t work out, and otherwise note in the log that it was successful. Then, if “master files” had also been checked off, it would begin to transcode the 1920 by 1080 files. All of those would go in the “COMPRESSED” folder, inside “VIDEO”, inside a newly created folder called “1920_by_1080_23_dot_98”. For different resolutions and frame rates it would create different folders; if a folder already existed from the last time this procedure was done, it would just use the existing folder. And finally, after the masters, if “thumbnails” had been checked off, it would make those and put them in a folder alongside the others called “120_by_90_23_dot_98”.
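Those per-resolution folder names follow a predictable pattern, so a tiny helper could generate them on demand (a sketch; note the original text also shows an “_fps” suffix on the proxy folder, which this version drops for simplicity):

```python
def render_folder(width: int, height: int, fps: float) -> str:
    """Folder name in the post's convention, e.g. 1920_by_1080_23_dot_98."""
    fps_txt = f"{fps:.2f}".replace(".", "_dot_")
    return f"{width}_by_{height}_{fps_txt}"

# Each render pass writes into COMPRESSED/VIDEO/<render_folder(...)>,
# creating the folder only if a previous ingest hasn't already made it.
```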
Now, one thing I should note is that all of the media files have the same name no matter what resolution they are. This allows easy offline editing and then simply reconnecting to the larger or smaller files as needed. The reason this won’t confuse Final Cut Pro is that the files are in different folders, so the destination of a clip is unique even though the file name is not.
Once everything was done, it would cross-check itself to verify it really got everything done, then give you the log file and make a sound letting you know it was all done.
It would then ask if you wanted to create a Final Cut Pro project file or add all the clips to an existing one. It would let you choose which of the three sizes you had just made; what I’d do is choose the proxies and make a new project. It would then launch FCP, make a new project, and put all of my 640 by 360 files in a new bin, which I could then rename. I’d then start editing away. The first thing I’d do in editing is drag a 640 by 360 clip down into my timeline. FCP would ask if I wanted to adjust my sequence settings to match my clip and I’d say yes. Then nothing on that sequence would require any rendering, and if I did color or effects it would all be real time since it’s ProRes. If I was doing heavy filters I’d use unlimited mode, which allows playback even when heavily processed. I’d also likely change my render settings so that quick previews render at lower resolutions.
Then, once I had an edit done, I’d highlight everything in my timeline and from the Tools menu choose “Make Media Offline”. Suddenly everything would turn red on my timeline and there would be no picture. Then under File I’d choose “Reconnect Media”, navigate to the master files, select my file, and hit Choose. Then I’d say yes to reconnect everything in this path. It would warn me that some of the settings had changed and I’d say okay. Then everything might look zoomed in. So what I’d do is highlight everything and hit Command-C. Then I’d hit Command-0 and choose the preset that matched my master clip settings, so 1080p ProRes HQ. I’d hit OK, and with the new settings now in effect I’d hit Command-V, and the whole zoomed-in problem should go away at that moment. Nothing should need rendering unless it had effects or plugins. At that point I’d further tweak the color correction, which I’d do with the 3-way and 1-way color correctors, and then I’d be done. I’d master my picture-locked video, take it to an audio environment to mix 5.1 96 kHz sound, then bring that sound back and master my final Blu-ray files.
Now, a future version of this app would actually automate all of those last steps I described. It would have an FCP plugin where you could choose “make master quality”, and it would take a proxy edit and up-res it all to 1080. An even further-along version could handle the situation in which the 1080 files were never created: it would go in and create them from the original .mts files that were still backed up, even using handles and timecode to take just a few seconds on either side of each clip that got used. And further in the future still, you could be editing the thumbnails on a server where a bunch of users are commenting on the parts they like, and the server would give you local packages of content, in the right directories, to sync up with FCP projects. So you could start an edit on the website using thumbnails someone uploaded; then, once you felt you had the right 24 clips to work with, a peer-to-peer network of video editors’ computers could seed you those 640 by 360 files, and you could pull those in and edit them locally. And in an even more distant version of all this...
Your computer would join a video editing “cloud”. When someone went to master an HD version of a video for which you had some of the shots, a designated 15 percent of your computer’s processing power (or 100 percent, if you weren’t using it) would be utilized to transcode and render out just the clips you had, even creating them from the .mts files if they didn’t exist as ProRes yet, and only for the lengths used in the edit. It would then either transport the resulting files to the master editor, or possibly just send over the network the information describing the frames, letting the main computer encode the final rendered video, which at the end would already be uploaded to the cloud computer that had been managing all of these local CPUs and hard discs.
Using the above technology, we can transform DIY filmmaking from a single-camera activity into an environment where films are made by thousands of shooters all across the world.
And the first step is really being able to keep the time stamp, shooter, project, and location all in the file name, and then starting to standardize the folder structures in which footage is worked with.
I know AVCHD has a lot of problems, but what I’ve described above addresses almost all of them. And if you consider how efficient AVCHD’s file sizes are, it really makes sense to store raw footage in that format: you can back it up easily, transfer it over the web, and do so many other things you could only do with really lightweight footage. Also, these days you have to transcode footage before you can edit no matter what, so you might as well have the transcoded files be the big, badass, color-correctable files. But everyone is portable now and many edit on the road, so being able to worry only about small 640 by 360 files makes for a very doable portable post-production environment.
So that is the system. Whoever thinks they can make this: figure out how much of your time it would take and how much you’d need to get paid, then copy, paste, and edit what I wrote above into a kickstarter.com project. I’ll help promote the Kickstarter project and we can get this software made. And quick: I’m shooting a feature film in 9 days and need this! Haha, if it’s not ready in time, I’ll understand, but man, if it was, we’d make history with that behind-the-scenes story. Either way, I’ll need all this functionality, and so will thousands of other filmmakers. So if you are reading this and you’re excited about making this a reality, let’s do this!