In my previous post, I described how we extracted content from the original FMV files and built tools for analyzing roughly 67 GB of archives in search of the intermediate components used to create the FMVs. These parts form the basis of the remastered FMV content and served as “assembly drawings” at the start of the project.
As stated in the previous article, the remastering workflow is divided into three branches: remastering hand-drawn frames, remastering 3D models, and remastering sound. Below I will cover the details of the workflow and the tricks we used to automate the creation of the bulk of the video content.
We scaled all the original hand-drawn frames up to 4K resolution (3840x2160). Taking into account the widened scene composition and the fact that the game was displayed with non-square pixels, this meant that all remastered resources needed to be created at a resolution of 4440x2400 pixels.
For remastering all the hand-drawn FMV frames we decided to use Adobe Animate, because we already had a ready-made workflow from developing Day of the Tentacle Remastered. The team of artists had mastered this process well, so we did not consider other options.
An example of remastering a hand-drawn frame
The original 3D models from the archives were in the 3D Studio Release 3 format. Fortunately, modern versions of 3D Studio Max were able to import all of the mesh and animation keyframe data with the help of another automation script. After that, we converted this intermediate file to work in Autodesk Maya, where the artists performed their remastering magic.
To give the mesh surfaces a new style, new shaders and high-quality textures were applied, and the mesh data was significantly augmented to give the models a smoother look. In addition, the camera frame was widened for all of the FMV cameras to match the working resolution of 4440x2400 pixels, because the original cameras were set up for a narrower aspect ratio.
An example of remastering 3D models
As for audio, we managed to find high-quality originals for most of it, but there were exceptions. The English voice recordings were packed in the archives, but the voice acting in other languages, which was handled by external partners, was not available to us. We also managed to find the original music by The Gone Jackals that was used in the FMVs. Some of the sound effects (SFX) were replaced with similar-sounding but more “densely” sounding versions.
Below is a block diagram roughly explaining how we processed the source resources and linked them to the remastered content. The original video frames extracted with SanExtract.exe were used as the “source of truth” for comparison against all the archived data files. Archive manifest files were generated by recursively searching through all the archived data; they were used to quickly find every unique file of a given type.
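As an illustration, a manifest of this kind can be built with a short script. The sketch below is a minimal reconstruction, not our actual tool; the archive root, the output file name, and the JSON format are assumptions for illustration:

import os
import json

def build_manifest(archive_root):
    # Map each file extension to the list of unique files found under archive_root
    manifest = {}
    seen = set()
    for dirpath, _dirnames, filenames in os.walk(archive_root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            full_path = os.path.join(dirpath, name)
            # Treat (name, size) as a cheap uniqueness key; a real tool
            # would more likely hash file contents
            key = (name.lower(), os.path.getsize(full_path))
            if key not in seen:
                seen.add(key)
                manifest.setdefault(ext, []).append(full_path)
    return manifest

# Hypothetical archive location and output file
manifest = build_manifest(r"G:\FullThrottle_Backup")
with open("manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)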
The SanWrangler tool was used to visually compare the original source frames with the archived data. The user could visually link archive files to the original frames and save the result as a dependency map in XML format. Once the dependency map was created, a Python script could automatically generate the hand-drawn frame files from the original “drawing” resources, as well as the “assembly drawings” for Maya 3D. These files became the starting point for the team of artists, who then began remastering.
Extracting original resources and creating “assembly drawings”
This was the first of many steps that ultimately allowed us to produce finished remastered FMVs. Now we had a starting point for all the files that needed to be redone, but how do we put all these fragments back together?
Below I will describe the automation methods used in the FMV production process. These methods are applicable beyond FMV generation and beyond this one game; I think they are quite versatile and can be used in many aspects of game development.
Like most graphics workflows, this process is iterative. Somewhere in a source file there may be a bug the artist needs to fix, and sometimes resource-dependent files had to be re-exported. I think we would all prefer that this work be done by a computer rather than by an error-prone human.
We knew exactly how the Full Throttle Remastered videos should look and sound; we just needed to improve their graphics and audio. Every video had to match the original frame by frame, including camera movement paths, sound volumes, panning, and so on. To achieve this, we needed to know what the original FMV creation workflow looked like. Those 67 GB of data from the LucasArts archives contained many clues about how everything worked in the original, and they were a great starting point.
The process of creating original FMV
It may sound a bit nostalgic, but I think it is important to discuss the “digital archeology” aspects of this kind of game remastering. After all, understanding how the original was created answers many questions and gives hints about how resources were turned into the finished result. When creating the new remastered FMVs, we needed to apply the same transformations to our remastered source resources so that the finished product would look as close to the original as possible. In particular, we needed to know the following:
- The location of the audio tracks in the timeline
- Volume and pan settings for audio tracks when playing in a game
- Frame composition and placement of each video frame in the finished product
A tool called SMUSHFT (SMUSH for Full Throttle) allowed the FMV creator to place video and audio resources on a timeline and then encode the resulting FMV movie (in .san format), which was read by the game engine. Each video was divided into a series of frames that were stitched together to create the finished result. SMUSHFT let the user visually move these resources along the timeline and, if necessary, re-encode the video.
It probably goes without saying that I did not take part in creating the original game. I could only guess at how the original resources were created by studying the archived data and looking at the formats and executable files packed within it. It appears that the 3D models were created in Autodesk 3D Studio Release 3, and the hand-drawn parts in DeluxePaint Animation v1.0. I also do not know at which stage the audio waveform data was generated, but each audio clip used (in .sad format) contains keyframed volume and panning information used for mixing the sound at runtime.
The process of creating the original FMVs
After these separate parts of a frame were created, a frame-compositing step was performed. It combined the 3D renders of the frame with the hand-drawn animation frames (along with everything else), producing the finished frames used by the SMUSHFT tool (.nut files). Once the project was ready for encoding, the video was processed and the finished result (.san) could be played back in the game engine.
SMUSHFT performed the final encoding into the original video file format (.san), and each video had a project file (.pro) describing the video assembly (the layout of sound, video, and subtitles). We wanted to extract this information so that we could generate an Adobe Premiere Pro project file and use it to encode a remastered 4K version of the video. To do this, we had to reverse-engineer the SMUSHFT project file.
Reverse engineering file formats
It's great when you have the source code, because you can simply study it to understand how a project file is created and read. Without source code, you have to open the project file in a hex editor and look for patterns inside it. This is the method we used to extract useful content from the SMUSHFT project files.
Since we could run the original SMUSHFT in DOSBox, we could see the program's user interface, which gave us hints about the file format. Here is a screenshot of the tool with an original .pro file open:
Sample SMUSHFT project
Here you can notice the following: the file has named resources (2027.NUT, 2027.SAD, IN_06A.NUT, etc.). These names will most likely appear inside the file as ASCII strings. In addition, there are frame counters along the top of the timeline, and increasing layer numbers to the left of it. Finally, each resource on the timeline starts at a specific frame number and has a specific duration. If we could extract this information from the original project files, we would know where to automatically place the new resources on the Adobe Premiere Pro timeline.
Sample Adobe Premiere Pro project
Opening the original project file in a hex editor yields quite a lot of useful information. Look at the example shown below in hexadecimal:
SMUSHFT project file in a hex editor
We can start browsing the .pro file with a hex editor (I prefer Hexplorer) and search for patterns. It is easy to find the named resources as ASCII strings with a terminating zero byte. In roughly the same region of the file there is a group of values stored as shorts (two-byte integers). Comparing the numbers displayed in the SMUSHFT tool with the numbers in the hex dump of the project file gives us the basis for correctly converting the original project files for a modern video editor like Adobe Premiere Pro.
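To give a flavor of what such extraction looks like, here is a minimal sketch that pulls zero-terminated ASCII resource names out of a .pro file. The layout of the short values around each name is our own inference, so treat the offsets (and the example file name) as assumptions rather than a documented format:

import re
import struct

def extract_resource_names(pro_path):
    # Read the whole binary project file
    with open(pro_path, "rb") as f:
        data = f.read()
    # Resource names like "2027.NUT" are stored as ASCII with a trailing zero byte
    pattern = re.compile(rb"[A-Z0-9_]{1,8}\.(?:NUT|SAD)\x00")
    resources = []
    for match in pattern.finditer(data):
        name = match.group(0)[:-1].decode("ascii")
        # Assumption: a pair of shorts (start frame, duration) sits near the
        # name; the real offsets were worked out by eye in the hex editor
        start, duration = struct.unpack_from("<hh", data, match.end())
        resources.append((name, start, duration))
    return resources

for name, start, duration in extract_resource_names("2027.PRO"):
    print(f"{name}: starts at frame {start}, lasts {duration} frames")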
A set of tools for automation
The bulk of this workflow was automated and did not require human intervention. One reason for this was that the content of every video was copied exactly from the original; in effect, we were just doing a content upgrade, so we had practically no freedom to change the structure of the FMVs. We only needed to find a way to recreate the videos from high-resolution resources while minimizing the hands-on time spent on the product.
First, it must be said that a crucial initial step before automating the whole process is a conversation with the content creation team (graphics and audio). The reason is that most automation requires the creators to adhere to a specific set of rules about project setup, file layout, tools used, and so on. In our project this meant agreeing on the tools for creating the hand-drawn frames, 3D models, and sounds, and then on a video editor to assemble all of it together. We also had to agree on which parts of the workflow would be performed manually and which would be automated.
As a result, we decided the following:
- Hand-drawn frames will be created in Adobe Animate at a resolution of 4440x2400 pixels
- 3D models and animations will be created in Autodesk Maya and rendered manually, also at 4440x2400 pixels
- Audio files will be created as 48 kHz, 16-bit .wav files
- Video fragments will initially be generated automatically, and an artist can then change any part as needed (with some exceptions)
- The final FMV stitching and encoding steps will be automated
To make the tools as automated as possible, we used several techniques. Python was chosen as the “glue” holding everything together, because it is easily extended with libraries and the code is easy to write and maintain. We also took advantage of its built-in support for platform-independent file manipulation (copying, moving, deleting).
Python - calling executable files, getting results
The Python subprocess library suited us perfectly, because it allows you to launch other executables and wait for them to finish. It gives you the program's return code and access to its stdout & stderr buffers.
import subprocess

# Launch an external tool and wait for it to finish
# (the arguments here are hypothetical, for illustration only)
result = subprocess.run(
    ["SanExtract.exe", "-i", "2027.SAN", "-o", "frames"],
    capture_output=True, text=True)

# The return code and stdout/stderr buffers tell us whether it worked
if result.returncode != 0:
    print("Extraction failed:", result.stderr)
An example of interacting with executable files in Python
Python - Win32 API
The Win32 API is very useful because it gives access to sending Windows keyboard and mouse messages from a script. For example, you can create a function that clicks the mouse at specific X, Y screen coordinates:
import win32api
import win32con

def ClickXY(x, y):
    # Move the cursor and send a left-button press/release at that position
    win32api.SetCursorPos((x, y))
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, x, y, 0, 0)
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, x, y, 0, 0)
An example of simulating a mouse click in Python
You can even send keypress events (with or without modifiers):
import time
import win32api
import win32con

def PressKey(code, modifierCode=None):
    # Hold the modifier key (e.g. Shift or Ctrl) if one was requested
    if modifierCode:
        win32api.keybd_event(modifierCode, 0, 0, 0)
    win32api.keybd_event(code, 0, win32con.KEYEVENTF_EXTENDEDKEY | 0, 0)
    time.sleep(0.021)
    win32api.keybd_event(code, 0, win32con.KEYEVENTF_EXTENDEDKEY | win32con.KEYEVENTF_KEYUP, 0)
    if modifierCode:
        win32api.keybd_event(modifierCode, 0, win32con.KEYEVENTF_KEYUP, 0)
An example of simulating keystrokes in Python
There are many other possibilities, but the examples above went a long way toward our goals. You can send keyboard events to any active program and it will receive them as if we were typing at the keyboard, including hotkey presses.
Python - computer vision for clicking on buttons
The most unique experience was using computer vision in tools that could not be automated through built-in scripting. Most modern tools have scripting support but still require user intervention. For example, 3D Studio Max allows you to run MAXScript files from the command line. In our case, we ran a script to automatically import a 3D mesh file, after which 3D Studio Max started up and displayed the Shape Import dialog box, expecting the user to click its buttons:
Sample Import Shape dialog box
So, we wrote an automation script, and now we have to sit in front of the screen pressing buttons? Instead of sitting at the keyboard waiting for the pop-up window to appear, we can have the script take a screenshot, use the OpenCV bindings for Python to search for a template image of the button, and click it automatically. Here is what the image template looks like for the example above:
Image template for ok_button.png
It is worth noting that the image template contains additional features (the text for “Single Object” and “Multiple Objects”). This gives us a more deterministic search result. Below is an example of a Python script used to automatically click on a found template image location:
import cv2
import numpy as np
import win32api
import win32con
from PIL import ImageGrab  # the original snippet used PIL's "import ImageGrab"

def ClickTemplate(template_path):
    # Grab the screen and convert it to a grayscale OpenCV image
    screen = cv2.cvtColor(np.array(ImageGrab.grab()), cv2.COLOR_RGB2GRAY)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    # Find where the button template best matches the screenshot
    result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
    _min_val, max_val, _min_loc, max_loc = cv2.minMaxLoc(result)
    if max_val > 0.9:  # confidence threshold (an assumption, not the original value)
        h, w = template.shape
        x, y = max_loc[0] + w // 2, max_loc[1] + h // 2
        # Click the center of the matched region
        win32api.SetCursorPos((x, y))
        win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, x, y, 0, 0)
        win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, x, y, 0, 0)

ClickTemplate("ok_button.png")
An example of clicking a screen element using OpenCV in Python
All the examples above are based on Python, but there are cases where we needed more precise control over the Windows windowing system. This led us to develop native tools using the Windows Automation API.
Windows Native (C++) - Windows Automation API
The Windows Automation API provides access to the legacy Microsoft Active Accessibility API (MSAA) as well as the Microsoft UI Automation API. You can read more about it on the Microsoft page.
As a result, we were able to query specific Windows UI elements (buttons, text fields, tabs, menu items), find out where those elements are located on screen, and click or otherwise interact with them. The Windows SDK also includes testing tools that show which properties are accessible; these let us figure out what could be automated in each specific program.
The Inspect.exe application is quite useful for displaying a program's window control hierarchy; it gives a rough idea of where objects such as menu controls are located and how to reference window elements through automation API calls.
Inspect.exe example
Once you know a program's window control hierarchy, you can walk it from the main window handle and click different elements through the API:
#include <Windows.h>
#include <UIAutomation.h>
#include <cstring>

// GetMiddlePoint() and DO_TASK_INTERVAL_WAIT_MS are helpers from our codebase:
// the first returns the center of a RECT, the second is a small delay in ms.

// Click on a sub-menu item given the Window & Menu handles.
void ClickSubMenu(HWND hwnd, HMENU hmenu, const char *pMenuName)
{
    // Iterate through the menu items of the window
    int menu_item_count = GetMenuItemCount(hmenu);
    for(int menu_id = 0; menu_id < menu_item_count; ++menu_id)
    {
        char menu_name[MAX_PATH];
        int len = GetMenuString(hmenu, menu_id, reinterpret_cast<LPSTR>(&menu_name[0]), sizeof(menu_name), MF_BYPOSITION);

        // Look for the specific menu you're searching for and click it
        // Make sure to set the window active before doing it...
        if(!strcmp(pMenuName, menu_name))
        {
            // now get the rect and click the center
            RECT rect;
            BOOL success = GetMenuItemRect(hwnd, hmenu, menu_id, &rect);
            if(success)
            {
                SetActiveWindow(hwnd);
                POINT point = GetMiddlePoint(rect);
                SetCursorPos(point.x, point.y);
                mouse_event(MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_LEFTDOWN, point.x, point.y, 0, 0);
                mouse_event(MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_LEFTUP, point.x, point.y, 0, 0);
                Sleep(DO_TASK_INTERVAL_WAIT_MS);
            }
        }
    }
}
An example of clicking a window control in C++
And, of course, sending keystrokes to the active window is just as simple:
#include <Windows.h>
#include <UIAutomation.h>
#include <cstring>

// CHARACTER_REPEAT_INTERVAL_MS is a small per-character delay from our codebase.

// Type the character string to the given window handle
static void TypeCharacters(HWND window_handle, const char *pString)
{
    int len = strlen(pString);
    for(int count = 0; count < len; ++count)
    {
        SendMessage(window_handle, WM_CHAR, (WPARAM)pString[count], (LPARAM)0);
        Sleep(CHARACTER_REPEAT_INTERVAL_MS);
    }
}
An example of simulating keystrokes in C++
Of course, these APIs offer many more features, but the examples above show the essentials. The Inspect.exe tool sheds light on which elements of a program's window can be accessed.
Intermediate text formats
Part of our workflow involved saving files in text form and changing values in those text files. Ultimately, a tool's user interface is just a way of changing the state of its underlying data. If you know what that data should be, you don't have to work through the tool at all; you can simply change the data. The trick is knowing how to manipulate it, and with proprietary file formats that can be a daunting task. Wouldn't it be great if everything were a simple text file to work with?
The trick is to find a way around the proprietary file formats used by most tools. The solution is usually the Import and Export options present in most modern commercial tools. Here are some examples:
Adobe Premiere Pro saves files in a proprietary format, but it can import and export projects as Final Cut Pro XML. After exporting to XML, you can make the changes you need and re-import the project back into Adobe Premiere Pro.
Another example is fixing references to textures in the obsolete Autodesk 3D Studio Release 3 mesh format. When importing an original mesh file, we saved the newly converted mesh as an intermediate ASCII .fbx file. In this format you can process the file as plain text and replace all the texture reference lines with the correct ones, as sketched below.
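A sketch of that kind of text substitution follows; the exact property names in an ASCII .fbx vary by exporter version, and the mapping table here is hypothetical:

# Map old texture file names to their remastered high-resolution versions
# (hypothetical entries for illustration)
TEXTURE_MAP = {
    "BENBIKE.GIF": "benbike_diffuse_4k.png",
}

def fix_texture_references(fbx_path):
    with open(fbx_path, "r") as f:
        lines = f.readlines()
    with open(fbx_path, "w") as f:
        for line in lines:
            # ASCII FBX stores texture paths on lines roughly like:
            #   FileName: "C:\old\path\BENBIKE.GIF"
            for old_name, new_name in TEXTURE_MAP.items():
                if old_name in line:
                    line = line.replace(old_name, new_name)
            f.write(line)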
Adobe Animate / Flash is pretty funny: it turns out that .fla files are actually slightly “broken” .zip files. Uncompressed, they are stored in the XFL format, which can refer to other XFL objects (for example, bitmaps) in a local folder. Double Fine's lead engineer, Oliver Franzke, wrote a modified Python script for packing/unpacking .fla files with ZIP, so that we could create and modify these files.
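In spirit, the pack/unpack step looks something like the sketch below; Oliver's actual script also handled the ways a .fla deviates from a well-formed zip, which this sketch does not:

import os
import zipfile

def unpack_fla(fla_path, out_dir):
    # Treat the .fla as a zip archive and extract the XFL contents
    with zipfile.ZipFile(fla_path) as archive:
        archive.extractall(out_dir)

def pack_fla(src_dir, fla_path):
    # Re-archive the XFL folder back into a .fla
    with zipfile.ZipFile(fla_path, "w", zipfile.ZIP_DEFLATED) as archive:
        for dirpath, _dirnames, filenames in os.walk(src_dir):
            for name in filenames:
                full_path = os.path.join(dirpath, name)
                archive.write(full_path, os.path.relpath(full_path, src_dir))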
Usage examples
3D Studio Max
A current version of 3D Studio Max was used to import the original .prj files into a scene and save them as ASCII .fbx. For each .prj file that needed converting, a MaxScript (.ms) file was automatically generated from Python; it looked like this:
importFile "G:\FullThrottle_Backup\FullThrottle_SourceAssets\BENBIKE.PRJ"
An example of importing 3D models using MaxScript
After that, this .ms file was simply invoked from Python by running 3dsmax.exe:
3dsmax.exe -U MAXScript "C:\FullThrottleRemastered\import_prj.ms"
Sample console command invoking the executable with a given MaxScript file
As mentioned above, at this point 3D Studio Max showed a dialog box that had to be clicked. The OpenCV + Python combination clicked the button in this window so that the original file was imported without user intervention. After the import, a series of menu keystrokes was sent (using the Python win32api) to launch another MAXScript file, which exported the model as an ASCII .fbx file. Since the .fbx was plain text, all of the model's texture dependency lines were then replaced with references to the modern-format images. The modified .fbx file was then automatically loaded back into 3DSMax and exported as a .max file. At this point the .max file could be handed to an artist for remastering.
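Putting those pieces together, a per-file conversion driver might look roughly like this. The 3dsmax.exe install path is hypothetical, ClickTemplate is the OpenCV helper sketched earlier, and the polling loop is a simplification of how we waited for the dialog:

import subprocess
import time

MAX_EXE = r"C:\Program Files\Autodesk\3ds Max\3dsmax.exe"  # hypothetical install path

def convert_prj(prj_path, ms_path):
    # Generate a one-line MaxScript file that imports the original mesh
    with open(ms_path, "w") as f:
        f.write('importFile "%s"\n' % prj_path)
    # Launch 3ds Max with the generated script; it pops up the Shape Import dialog
    subprocess.Popen([MAX_EXE, "-U", "MAXScript", ms_path])
    # Poll the screen until the dialog appears, then dismiss it;
    # ClickTemplate() is a no-op until the OK button is actually visible
    for _ in range(30):
        ClickTemplate("ok_button.png")
        time.sleep(1.0)

convert_prj(r"G:\FullThrottle_Backup\FullThrottle_SourceAssets\BENBIKE.PRJ",
            r"C:\FullThrottleRemastered\import_prj.ms")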
Adobe Animate / Flash
Adobe Animate / Flash was used to remaster all the hand-drawn FMV resources. We took the original hand-drawn frames (320x200 pixels) found with the SanWrangler tool and used them as “assembly drawings”. The images were scaled up to the 4440x2400-pixel working size, after which a .fla file was generated automatically by a Python script.
It was then enough to generate the .fla file from scratch using our knowledge of the Adobe Animate / Flash XFL format. We were able to reuse the toolkit Oliver Franzke had already built to generate the “assembly drawing” files for the hand-drawn animation.
Adobe Premiere Pro
The Windows Automation API helped us a great deal in determining which Premiere Pro controls were on screen; in some cases they had no hotkeys. After obtaining the coordinates of the menu items, we moved the cursor to those coordinates and sent a mouse click event.
That is all great, but some controls are rendered in other ways and are therefore invisible to the Windows Automation API. For those cases we used the OpenCV + Python combination to bring computer vision into the scripting environment. This was especially useful with Adobe Premiere Pro: although it has partial JavaScript scripting support, the controls we needed were not reachable through that API.
In addition, Adobe Premiere Pro project files are stored in a proprietary binary format, so we couldn't just magically create a Premiere Pro file. But we could use the import feature, which allowed us to import a Final Cut Pro XML file. It was enough to generate the right XML placing all the resources on the timeline, then automatically import that Final Cut Pro .xml to convert it into the needed format. We could then push the exported frames through an automated queue to be combined into a finished video.
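For illustration, here is a minimal sketch of generating such a timeline XML with Python's standard library. The element names follow the Final Cut Pro XML interchange format, but the full set of elements Premiere Pro requires is larger than shown, and the clip data and frame rate are hypothetical:

import xml.etree.ElementTree as ET

def make_sequence(name, clips, fps=12):
    # Build the skeleton of a Final Cut Pro XML sequence
    root = ET.Element("xmeml", version="4")
    sequence = ET.SubElement(root, "sequence")
    ET.SubElement(sequence, "name").text = name
    rate = ET.SubElement(sequence, "rate")
    ET.SubElement(rate, "timebase").text = str(fps)  # placeholder timebase
    media = ET.SubElement(sequence, "media")
    track = ET.SubElement(ET.SubElement(media, "video"), "track")
    # Place each clip at its frame position, mirroring the .pro layout
    for clip_name, start, duration in clips:
        clipitem = ET.SubElement(track, "clipitem")
        ET.SubElement(clipitem, "name").text = clip_name
        ET.SubElement(clipitem, "start").text = str(start)
        ET.SubElement(clipitem, "end").text = str(start + duration)
    return ET.ElementTree(root)

tree = make_sequence("2027", [("2027_remastered.mov", 0, 120)])
tree.write("2027.xml", xml_declaration=True, encoding="UTF-8")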
All stages
Below is a generalized block diagram showing all the automated parts of the new workflow. Each automated segment is surrounded by a rounded rectangle with additional information about the automation techniques used.
Simplified flowchart of the remastered FMV automation
You may notice that most of the Adobe Premiere Pro work requires Python as well as specialized native Windows code. The reason is the complex structure of the Premiere Pro windows and the need to use the native Windows Automation API to interact properly with all of the application's dependent child windows.
Putting it all together
Using the methods described above, we were able to set up several automation machines to divide the work across all of the videos. We also integrated a Slack bot into the workflow: it posted status updates to our Slack channel about each video as it passed through the processing pipeline, so we knew when something went wrong.
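Reporting of that kind can be done with a Slack incoming webhook. The sketch below is a guess at the shape of such a bot, not our actual implementation; the webhook URL and message text are hypothetical:

import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # hypothetical

def report_status(video_name, stage, ok=True):
    # Post a one-line status update to the team's Slack channel
    icon = ":white_check_mark:" if ok else ":x:"
    payload = {"text": "%s %s - %s" % (icon, video_name, stage)}
    request = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)

report_status("2027.san", "4K encode finished")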
Automation example: Adobe Premiere Pro
Problems we encountered
It all sounds great, but in reality we ran into problems during the project. I will list just the main ones.
1) Iterating on the finished audio mix. All the audio files were remastered gradually. So when we had, for example, a “BOOM!” sound effect, the sound engineer had no idea how it would sit in the final audio mix and had to wait for the video to be encoded to find out.
2) Storing uncompressed intermediate files. Frames were stored uncompressed right up to the final encoding of the finished video. We therefore had to keep a large number of frames in local storage, some of which were also kept in version control. This growth in stored data was very noticeable, and with some version control systems it can get quite expensive (we used Perforce).
3) Runtime. A solid part of the workflow was automated, which freed the engineers for other things. However, producing a single video could still take quite a long time; the most time-consuming part was encoding the frames at 4K. We had ways of examining asset state in Perforce to figure out which steps needed to be redone, but this was not as fine-grained as we would have liked.
Next steps
Yes, this turned out to be a long article! Although our implementation of this workflow is quite specific to the project, I believe the individual automation methods can be used in the development of any game. With the videos done, we can move on to a related topic: playing FMVs back at runtime. That includes issues such as encoding multilingual audio streams and frame synchronization when playing back the original FMVs. Look out for the third part of the article!