brettlajzer.com http://www.brettlajzer.com Brett Lajzer's personal blog. brett@brettlajzer.com DLP Slicing Wed, 23 Nov 2016 15:08:33 UTC http://www.brettlajzer.com/40
<p>A couple of months ago I was talking to a former coworker, <a href="http://chadhamlet.blogspot.com/">Chad Hamlet</a>, about 3D printing, and he brought up a gripe with his workflow: it could take hours (potentially 12+) to generate the slices he needed as input for his DLP printer. To put things into perspective, the actual prints take as long, if not much longer, depending on what's being printed. My kneejerk reaction was that this was ridiculous and that he should be able to generate slices in a significantly shorter time, especially with GPU acceleration. He mentioned that there was a <a href="https://github.com/formlabs/hackathon-slicer">prototype app</a> that was done during a <a href="https://formlabs.com/blog/open-source-dlp-slicer/">hackathon at Formlabs</a>, but he couldn't get it working for large models with millions of polygons (his ZBrush output). With the knowledge of how that slicer worked, I started on a standalone one in C++ that could handle his workloads.</p>
<h2>Slice Generation</h2>
<p>For the slice generation algorithm to work, the model has to be water-tight (<a href="https://en.wikipedia.org/wiki/Manifold">manifold</a>). This means that there aren't any discontinuities in it; the reason should become clear soon. This presents some problems when printing, because it normally means that the user has to do a bunch of CSG operations to carve out holes for the resin to drip out of. As it turns out, there are actually some tricks that you can use to make holes without doing CSG. Additionally, the model can have multiple overlapping and intersecting pieces, as long as each piece is water-tight.</p>
<p>The general idea of the algorithm is to continuously slice through the model, taking note of what's "inside" and "outside". Doing this analytically (you'd probably use something like raytracing) would be insanely expensive and slow. Instead, we use the GPU and the stencil buffer to accelerate it, since most GPUs can render millions of triangles in a couple of milliseconds.</p>
<p>To start out, let's envision the initial setup of the world. There's a model, in this case a hollow sphere, positioned in the center of our camera volume, which we'll take to represent the print volume. We're seeing this from the side: the left is the bottom (far plane of the camera) and the right the top (near plane). To generate the slices, we're going to render this model while sliding the camera volume toward the top end (without changing the size or shape of the volume). Our rendered slice will look like the intersection of the bottom of this volume and the model.</p>
<figure id="fig_setup"> <img src="/images/slicer/setup.png" class="centered" style="max-width: 50%"> <figcaption>The initial setup of the world.</figcaption> </figure>
<p>In this implementation, we'll be using the stencil buffer, a pretty old hardware feature that allows us to do math by rendering geometry. If you've played Doom 3 or any idTech 4 games, you've seen this in action: those games use the stencil buffer to render their shadows, and they are likewise concerned with knowing the inside of models versus the outside. For this algorithm, we're going to start by disabling face culling; normally you'd only want to render front faces, but we need both front and back faces. Then, we configure the stencil buffer operations such that whenever a front face is rendered, we decrement the value in the buffer while wrapping around (the stencil buffer holds unsigned integers), and whenever a back face is rendered, we increment and wrap. What this means is that any geometry that doesn't have a matching "other side" will leave a non-zero value in the stencil buffer. Since we're slicing through the model, the intersection of the model and the far plane will do exactly that. To get the actual rendered slice, we then just need to draw a white plane masked against the non-zero stencil buffer.</p>
<figure id="fig_slice"> <img src="/images/slicer/slice.png" class="centered" style="max-width: 50%"> <figcaption>What a slice looks like from the view of the stencil buffer.</figcaption> </figure>
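<p>Here's a minimal sketch of what that setup looks like in OpenGL. This is illustrative rather than the slicer's exact code; it uses the single-pass front/back configuration that I'll touch on below, and <code>drawModel</code> and <code>drawWhitePlane</code> are hypothetical helpers standing in for whatever geometry submission you already have:</p>
<code> <pre>
// Sketch of the stencil-based slicing pass described above. Assumes a
// context with a stencil buffer and an OpenGL 2.0+ function loader.
void renderSlice()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    // First, render the model into the stencil buffer only.
    glDisable(GL_CULL_FACE);    // we need both front and back faces
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_STENCIL_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    // Front faces decrement with wrap; back faces increment with wrap.
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_KEEP, GL_DECR_WRAP);
    glStencilOpSeparate(GL_BACK, GL_KEEP, GL_KEEP, GL_INCR_WRAP);
    drawModel();

    // Any pixel left with a non-zero stencil value is "inside" the model,
    // so fill those pixels with white.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawWhitePlane();
}
</pre> </code>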
<h2>Optimization, Improvements, Tricks</h2>
<p>The Formlabs implementation does quite a bit of unnecessary work, namely rendering the model three times: once with front faces, once with back faces, and then a third time to actually render the slice. In my implementation I only render it once, because it's possible to configure the hardware to handle front and back faces in one pass. The third pass is also complete overkill, since you just need to mask <em>something</em> against the stencil buffer; a single fullscreen triangle is enough, and there's no need to re-render the model. For a tiny model this optimization won't really make a difference, but for something with millions of triangles, it's the difference between a 5-minute slicing job and a 15-minute one.</p>
<p>Something Chad pointed out was that adding antialiasing to the slices would result in various levels of partially cured resin on the edges of the object, meaning that you can get a significantly smoother surface versus just black and white. To this end, I added support for using MSAA for antialiasing. The program will detect the supported sample counts and clamp this setting accordingly, but there are some broken drivers out there that report being capable of MSAA and then crash when actually using it.</p>
<p>The final slices get saved out to disk as PNGs. Something worth noting is that PNG has a subformat for 8-bit, single-channel greyscale images. Since we're going to be rendering greyscale images, it's important to use this instead of the standard 32-bit RGBA format. This cuts down both the amount of disk space required and the amount of time it takes to compress the slices.</p>
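<p>As an illustration (not necessarily what the slicer itself uses), here's how writing a single-channel greyscale PNG looks with the single-header <a href="https://github.com/nothings/stb">stb_image_write</a> library:</p>
<code> <pre>
// Sketch: writing an 8-bit greyscale slice as a PNG with stb_image_write.
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"

// pixels holds width * height bytes, one greyscale byte per pixel.
bool saveSlice(const char* path, const unsigned char* pixels, int width, int height)
{
    // comp = 1 selects single-channel (greyscale) output; the stride is the
    // number of bytes per row, which is just width for tightly packed data.
    return stbi_write_png(path, width, height, 1, pixels, width) != 0;
}
</pre> </code>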
<p>Some miscellaneous features include being able to specify all of the parameters of the printer in the config file, being able to scale the model (useful for testing, or if you're not modelling in the same units as the output), and validation of the model against the print volume.</p>
<p>I mentioned earlier that it's possible to avoid doing CSG operations on the mesh but still punch holes in it, which is useful when you want to duplicate, scale, and invert the model to make a shell. To do so, you duplicate the polygons on both surfaces where you want the hole to be and invert the normals of each side. This will, in effect, make it so that that part of the surface always has both a front and a back face, leaving an opening. These hole surfaces don't have to be manifold, as long as their edges are aligned on the up axis (Z in the case of this program).</p>
<h2>Results</h2>
<p>On my GTX 1080, I've timed a five-million-triangle model as taking around three minutes to slice (~4000 slices). This is over 240 times faster than the software Chad was previously using, so I consider all of this pretty successful. I was originally going to make the program multithreaded, so the CPU could build up a frame or two of data while waiting for the GPU to render, and also move the image compression and saving onto a different thread. These could still be added and would make slice generation even faster than it is now. Switching to a modern API like Vulkan (it uses OpenGL right now) would enable further speed increases, since transferring the rendered image back to the CPU could be done asynchronously (it's synchronous in GL). I'll leave these as exercises for the reader.</p>
<p>Chad sent me a bunch of pics (you can see <a href="https://chadhamlet.blogspot.com/">more on his blog</a>) and I've reproduced a few of them here to show off the nice results he was able to get.</p>
<table style="margin: auto;"> <tr> <td><img src="/images/slicer/strider_printing.jpg" style="max-width: 100%;"/></td> <td><img src="/images/slicer/strider_leg.jpg" style="max-width: 100%;"/></td> <td><img src="/images/slicer/finished_strider.jpg" style="max-width: 100%;"/></td> </tr> </table>
<h2>Get It</h2>
<p>I'll be making a packaged version available soon, but I need to draft up an appropriate freeware license. If you're a programmer, though, you could pretty easily write your own version from this description. Feel free to <a href="https://brettlajzer.com/contact.html">contact me</a> with any questions about how it works.</p>
<br /><a href="http://www.brettlajzer.com/40">view</a> dib - future Thu, 06 Oct 2016 14:12:42 UTC http://www.brettlajzer.com/39
<p>In the <a href="/38">previous post</a>, and the <a href="/37">one before that</a>, I talked about the architecture of my build system, <a href="https://github.com/blajzer/dib">dib</a>, and my motivations for writing it. In this post I'm going to go over some sticking points, bugs, and missing functionality that I'm planning on remedying in the future.</p>
<h2>General Items</h2>
<ul> <li><strong>Stop on Failure/Partial Success</strong> - Currently, dib won't stop immediately on failure and won't record partial successes in the database. This is pretty awful, especially for larger codebases, since it will do full rebuilds of the affected Target until it builds correctly. This should be relatively easy to fix, but the solution is going to be a bit nuanced.</li> <li><strong>Reduce Database Size</strong> - The various databases are larger than they need to be right now. I'm using Data.Text as keys in a lot of places and should be using hashes instead, since 32 or even 64 bits is significantly smaller than even the smallest file path.</li> <li><strong>Target Validation</strong> - There are some obvious aspects of Targets that should be validated: ensuring that there's at least one Stage and one Gatherer, for instance. Fixing this at the type level is the obvious best answer, and it looks like there's already a library for non-empty lists. There might be other things that can be put through validation too, but I need to investigate more here.</li> <li><strong>Dependency Caching</strong> - There's no caching at the dep scanner level right now, and that leads to a lot of repeated work, especially if one of the dependencies has a significant number of its own dependencies. I don't have a solid architecture for how I want to handle this yet; I'm leaning toward changing the type signature of the DepScanner function to include the database, but I think it needs something stronger, probably involving the StateT monad.</li> <li><strong>Windows Fixes</strong> - I have a local fix for this that I need to push, but currently the dib driver program doesn't build the correct command line to be executed by "system", so it fails to run the actual build. There might be other issues with Windows that I don't know about; I don't ever test dib there.</li> </ul>
<h2>C Builder Items</h2>
<ul> <li><strong>Better C/C++ Flags Separation</strong> - There's no separation between the flags that are used for C files and the flags used for C++ files. This should be an easy fix with minimal overhead.</li> <li><strong>Link Flag Changes Shouldn't Cause Full Rebuilds</strong> - Right now, if the user changes any of the options to the C Builder, it will cause a full Target rebuild. This is obviously not a good thing. I think the solution will be adding per-Stage hashes that influence the compilation, similar to how the Target hash currently does.</li> </ul>
<h2>Possible Extensions</h2>
<p>These are not guaranteed features, but rather things that could end up in dib, depending largely on how much time and effort I'm willing to put in for them.</p>
<ul> <li><strong>Retrieving Output of a Target's Last Stage in a Subsequent Target</strong> - At the moment, Targets can't send any information from one to another. I could see this being used to inform a later Target of where a library was built to, or something of that sort. However, I'm not really convinced this is a worthwhile feature, since the user can already predict where the output will end up and modify the Targets accordingly. It's just a possibility I've been throwing around.</li> <li><strong>Output Caching</strong> - Some transforms can be expensive to build: compression of textures, videos, archiving operations, etc. In a multi-user situation it would be advantageous to cache these files somewhere so other users don't have to endure the lengthy build process if the data hasn't changed. Being able to have a shared network location (folder, WebDAV, FTP) where this cache lives, and being able to pull from it, would be a really useful feature. This would currently be a lot of work and would require hashing the actual file contents instead of just checking timestamps, as sketched below. I'd want to implement it as an optional feature that could be enabled per-Target.</li> </ul>
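<p>A minimal sketch of the hashing half of that idea, using the cryptonite package (none of this exists in dib today):</p>
<code> <pre>
-- Illustrative only: derive a cache key from a file's contents using
-- cryptonite. Two users with identical inputs compute identical keys
-- and could therefore share the cached build output.
import Crypto.Hash (SHA256 (..), hashWith)
import qualified Data.ByteString as BS

cacheKey :: FilePath -> IO String
cacheKey path = fmap (show . hashWith SHA256) (BS.readFile path)
</pre> </code>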
<br /><a href="http://www.brettlajzer.com/39">view</a> dib - architecture Thu, 29 Sep 2016 14:47:53 UTC http://www.brettlajzer.com/38
<p><a href="/37">Last time</a>, I talked about my motivations for writing <a href="https://github.com/blajzer/dib">dib</a>, my personal build system. This time we're going to examine the underlying architecture in depth, with a focus on the types and execution.</p>
<h2>Types</h2>
<p>There are four fundamental types that form the structure of a build in dib (in order of increasing abstractness): <em>SrcTransform</em>, <em>Gatherer</em>, <em>Stage</em>, and <em>Target</em>. The first of these, <a href="#fig1"><em>SrcTransform</em></a>, represents the input and output of a command. There are four type constructors, representing the possible relationships of input to output (see the sketch after the figure):</p>
<ul> <li><em>OneToOne</em> - e.g. copying a file from one place to another, or building a .cpp file into a .o file.</li> <li><em>OneToMany</em> - e.g. extracting an archive, or writing out a converted file plus some metadata.</li> <li><em>ManyToOne</em> - e.g. linking .o files into an executable, or archiving a bunch of files.</li> <li><em>ManyToMany</em> - I don't have any accessible examples for this, but I've seen use cases in my professional work.</li> </ul>
<figure id="fig1"> <img src="/images/dib_arch/srctransform.png" class="centered" style="max-width: 50%"/> <figcaption>Fig. 1 - The four possible <em>SrcTransform</em>s</figcaption> </figure>
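<p>In code, <em>SrcTransform</em> is shaped roughly like this (a paraphrase for illustration; see the dib source for the real definition, which uses Data.Text rather than FilePath):</p>
<code> <pre>
-- Approximate shape of SrcTransform, one constructor per relationship.
data SrcTransform
  = OneToOne   FilePath   FilePath    -- e.g. compile one .cpp into one .o
  | OneToMany  FilePath   [FilePath]  -- e.g. extract an archive
  | ManyToOne  [FilePath] FilePath    -- e.g. link .o files into an executable
  | ManyToMany [FilePath] [FilePath]
</pre> </code>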
<h2>Pipeline Overview</h2>
<p><em>SrcTransform</em>s are the actual data that the build processes. They are the input to the entire process and are transformed as they move through each segment of the <a href="#fig2">pipeline</a>. They are initially generated by <em>Gatherer</em>s. The Gatherers provided with dib only produce <em>OneToOne</em> transforms, the input being the files they gathered, with an empty string as the output. A <em>Target</em> can have more than one Gatherer; the output of each is combined into a single list that is passed into the first <em>Stage</em>. Each Stage then does processing on the transforms that are passed in and hands the results to the next Stage.</p>
<figure id="fig2"> <img src="/images/dib_arch/pipeline.png" class="centered" style="max-width: 50%"/> <figcaption>Fig. 2 - High-level pipeline overview</figcaption> </figure>
<h2>Stages</h2>
<p>Each Stage takes as input a list of SrcTransforms and outputs either a list of SrcTransforms or an error string. At the beginning of every Stage sits an <em>InputTransformer</em>: a function that transforms the list of SrcTransforms into another list suitable for that Stage to process. In contrast to the other parts of a Stage (as we'll soon see), this operates on the entire list, to easily enable collation. The built-in C/C++ builder, for example, collates the list of OneToOne transforms of object files into a ManyToOne of object files to executable/library.</p>
<p>After passing through the InputTransformer, each SrcTransform is individually passed into a DepScanner, an IO action that takes a SrcTransform and produces a SrcTransform. In the case of the C/C++ builder this is the CDepScanner, which recursively scrapes the includes for further, unique includes. It changes the input OneToOne transforms into ManyToOne and adds the dependencies after the actual source file to be built. When processing a Stage, the timestamps of the input files of each transform are checked to determine if the transform should be built. By adding the dependencies to the transform, the system takes care of rebuilding that transform when they change, for free.</p>
<p>The final piece of the Stage is the StageFunc, the actual business logic that executes the transform. This is a function that takes in a SrcTransform and returns either a SrcTransform or an error message. The returned SrcTransform should be one that is suitable to pass into the next Stage. For the compilation stage of the C/C++ builder, this will be a OneToOne containing the object file. This whole process continues for each successive Stage.</p>
<figure id="fig3"> <img src="/images/dib_arch/stage_flow.png" class="centered" style="max-width: 50%"/> <figcaption>Fig. 3 - The flow of data within a <em>Stage</em></figcaption> </figure>
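<p>Putting rough signatures to those three pieces makes the data flow easier to follow. These are illustrative rather than dib's exact types:</p>
<code> <pre>
-- Illustrative signatures for the three parts of a Stage described above.
type InputTransformer = [SrcTransform] -> [SrcTransform]  -- whole-list, enables collation
type DepScanner       = SrcTransform -> IO SrcTransform   -- per-transform, adds dependencies
type StageFunc        = SrcTransform -> IO (Either String SrcTransform)  -- does the work
</pre> </code>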
<h2>Targets</h2>
<p>All of the previous pieces are encapsulated in the <em>Target</em> data type. A Target represents the input and final output product as a single unit. For example, a library or executable would each be a single Target; so too would the operation of copying a directory to a different location. A Target consists of a name string, a ChecksumFunc, a list of dependencies, a list of Stages in the order they are to be executed, and a list of Gatherers.</p>
<p>The name of a Target must be unique &mdash; if Targets share names, even when the only difference is a debug build versus a release build, there will be unnecessary rebuilds. Therefore, if there are parameters that users can provide to change aspects of the build, those should be encoded into the Target name. The ChecksumFunc calculates a hash of parameters to determine if the Target should be force-rebuilt. As an example, changing the compile or link flags in the C/C++ builder will cause the checksum to change and the whole Target to rebuild.</p>
<figure id="fig4"> <img src="/images/dib_arch/target.png" class="centered" style="max-width: 50%"/> <figcaption>Fig. 4 - Anatomy of a <em>Target</em></figcaption> </figure>
<h2>Execution Strategy</h2>
<p>The original execution strategy for building transforms was relatively simple: spawn n futures (where n = number of cores), store those in a list, and keep a list of the remaining transforms. Wait on the first item in the futures list and, when it finishes, gather up all of the finished futures, check for errors, and spawn up to n again. Repeat until done. This strategy has two major problems. The first, more obvious one is that if the future being waited on takes longer than the rest in the list, there will be a lot of time during which cores are idle. The second issue only rears its head due to the garbage-collected nature of GHC Haskell. Making and updating these lists so often causes a massive amount of garbage to be created, so much so that for a build of a C++ codebase with 100 or so translation units, over a gigabyte of garbage was being generated.</p>
<p>This led me to write the current execution strategy, the code for which is more nuanced, but which has better occupancy and generates significantly less garbage. I'm going to avoid getting into too much detail here &mdash; refer to the code for the exact implementation. The general idea is that there is a queue inside of an MVar, and instead of having implicit threads (previously represented with futures) to do work, there are explicit worker threads. Each of these workers grabs the queue from the MVar, peels off an item, and then puts the rest back. When there's nothing left, the worker is done and signals this to the main thread. When all threads are done, execution stops and the final result is returned.</p>
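<p>Condensed to its skeleton, the strategy looks something like the sketch below. The real code also collects results and propagates errors; here, <code>buildTransform</code> is a stand-in for executing a single transform:</p>
<code> <pre>
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Monad (replicateM)

-- Sketch of the worker-thread strategy: the work queue lives in an MVar;
-- each worker takes the queue, peels off one item, and puts the rest back.
runWorkers :: Int -> (SrcTransform -> IO ()) -> [SrcTransform] -> IO ()
runWorkers n buildTransform transforms = do
  queueVar <- newMVar transforms
  doneVars <- replicateM n newEmptyMVar
  mapM_ (\doneVar -> forkIO (worker queueVar doneVar)) doneVars
  mapM_ takeMVar doneVars  -- block until every worker signals completion
  where
    worker queueVar doneVar = do
      queue <- takeMVar queueVar
      case queue of
        []     -> putMVar queueVar [] >> putMVar doneVar ()  -- queue empty: done
        (x:xs) -> do
          putMVar queueVar xs      -- put the remainder back for other workers
          buildTransform x         -- execute one transform
          worker queueVar doneVar
</pre> </code>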
<h2>Next Time</h2>
<p>Hopefully this has been an enlightening look at how dib works internally. The ideas behind it are fairly simple and straightforward, even if the implementation is a bit tricky. I opted to leave out one topic: how the database (which tracks timestamps and hashes) works. In the next and final post, we'll be looking at various areas that could stand to be improved and some thoughts on how to improve them.</p><br /><a href="http://www.brettlajzer.com/38">view</a> dib Fri, 16 Sep 2016 17:54:15 UTC http://www.brettlajzer.com/37
<p>After putting it off for years at this point, I finally posted the build system (dib) that I've been working on since 2010 <a href="https://github.com/blajzer/dib">up on GitHub</a>. It's probably not the greatest example of Haskell code out there, since it was my first large project, but I've been slowly improving it over the years and I've tried to stay up to date with the language as much as possible. This has been an entirely free-time project and, as such, has only been motivated by my needs at any given time. What follows is a bunch of information on why I wrote it. Coming next time: a breakdown of the architecture.</p>
<h2>Background</h2>
<p>I'd been fed up with the state of build systems for years when I started the project. I liked the ubiquity of <a href="https://en.wikipedia.org/wiki/Make_(software)">Make</a>, but the syntax, quirks, and difficulty of writing a simple Makefile to build a tree of source turned me off of it. I would use it for really simple projects, but it was a massive hassle for anything more complicated. I turned to <a href="http://scons.org/">Scons</a> and <a href="https://waf.io/">Waf</a> after that, but both of them were overly complicated for what I considered simple builds (it's been a really long time since I've looked at them, so maybe that's changed). I did use Scons for an old Lua-based game engine I wrote, <a href="http://repo.or.cz/w/luagame.git">Luagame</a>, and it was pretty successful there.</p>
<p>When I got a professional programming job, we used extremely complicated Makefiles for code builds and <a href="https://en.wikipedia.org/wiki/Perforce_Jam">Jam</a> for data builds. If you've ever worked with Jam, recall that it has possibly the most inane syntax and convoluted methods of building things of any serious build system. When I changed jobs, the company I went to work for was using Jam for code builds, and that might be one of the most complicated build setups I've ever seen. To put things into perspective: adding a Jamfile for a new library might only take a half hour or so; copy-paste from another library and change the directories and names in it. However, there was a 99% chance that you made a non-obvious mistake, like naming your directory with embedded upper-case letters, accidentally not putting a space before a semicolon, or something even more obscure related to the way the system lumped files together into <a href="https://en.wikipedia.org/wiki/Single_Compilation_Unit">single compilation units</a> per n library files to try to improve compilation speeds. Suffice it to say, I don't like Jam.</p>
<h2>Goals</h2>
<p>I finally got fed up enough that in 2010 I decided to take matters into my own hands and laid out the groundwork for what would eventually become dib. These were the handful of high-level goals I had in mind:</p>
<ul> <li><strong>Forward, Not Backward</strong> - the majority of build systems in the wild are <em>backward, rule-based systems</em>; that is, the user asks for a build product, and the build system looks backward from the product to find the input files, using rules that the user has set up. It continues doing this until it hits a leaf and then begins executing from there. This also builds an implicit dependency hierarchy, which is how the build system knows how to order things. In contrast, dib is a <em>forward</em> build system. The user instructs it to take a set of files and do an action with them. The steps in generating an individual product (<em>Target</em>) are coded as a set of <em>Stages</em>.
Each stage takes input, does an operation, and passes the result to the next stage.</li> <li><strong>Don't Write a Parser</strong> - a lot of build systems have made what I consider a mistake: the language used to describe the build is bespoke, often with a grammar optimized for the programmer writing the parser rather than for the user. Make and Jam are both guilty of this, though I accept that those decisions were likely motivated by technical constraints at the time of their inception. I chose to embed dib in the Haskell language. While this has the downside of requiring a Haskell compiler, it has the upside of making the build strongly typed and giving the user access to the Haskell library ecosystem. It has also probably shaved years' worth of work off the project, since I didn't have to write and then continually fix a parser.</li> <li><strong>Try To Be Declarative</strong> - as much as possible, I tried to make the build specification declarative. That is, the user generally only needs to describe the build and not worry about the actual mechanics behind the scenes. With the exception of writing new builders, this ends up being the case. Most other build systems choose this route, and I think it's the only right way to do it.</li> <li><strong>Don't Do Extra Work</strong> - I rewrote dib twice: there was an initial prototype to prove out some concepts, and a first version that maybe wasn't well thought out. Part of the way through the first version, I determined that it was silly to attempt to figure out whether a target was up to date before building it. In a forward build system, it seems better to just try to build the target, and if nothing has changed, do nothing. That way you only evaluate timestamps/hashes/dependencies a single time instead of multiple times.</li> <li><strong>Be Straightforward and Obvious</strong> - this is something that a lot of build systems seem to fail at; once the user understands the primitives that the system offers, the mapping between them and the desired build should be obvious, regardless of the complexity of the build. I personally find forward build systems to offer this level of obviousness versus rule-based systems. In my head, at least, it follows a clearer path of logic: "what steps do I need to do to build this thing?" versus "here's what I want; what intermediate products is it made from, and what intermediate products are those made from?"</li> </ul>
<h2>Get It</h2>
<p>You can grab a copy of <a href="https://github.com/blajzer/dib">dib on GitHub</a>. It's MIT licensed. I haven't uploaded it to <a href="https://hackage.haskell.org/">Hackage</a> yet, but I want to get it up there.</p>
<h2>Next Time</h2>
<p>In the next post I'll be covering the system internals in much greater depth.</p><br /><a href="http://www.brettlajzer.com/37">view</a> SimCity 2000 DOS Data Formats Sat, 28 Feb 2015 18:15:47 UTC http://www.brettlajzer.com/36
<p>(I've been meaning to write this up for a while.) Around a year and a half ago I was bored and felt like digging around in some game engines, because it's interesting to see how people have solved various problems, what formats they use, and also what libraries they use. I ended up focusing on SimCity 2000 for DOS because it's pretty old and I'm not familiar with the limitations of DOS programming. I'm going to include bits of my thought process, so feel free to skim if you want spoilers.</p>
<h2>The DAT File</h2>
<p>Understanding the SC2000.DAT file is the meat of this post.
The GOG version of the game also includes a SC2000SE.DAT file. This is actually a modified ISO of what's on the Special Edition CD-ROM (it doesn't include the Windows version, sadly). ISOs are boring and very well documented, so we'll ignore it.</p>
<p>After opening up the file in a hex editor, I noticed that there was no header (no identifying words/bytes) and that a large portion of the beginning of the file seemed to have a uniform format: some letters (which looked like filenames) followed by two shorts. Clearly, it was an index of some sort. This was a DOS game, so the filenames were all in 8.3 format, which put them at 12 bytes each. They were not NUL-terminated C strings, which made extracting the index a lot easier (every entry is a fixed size). The format is exactly as follows:<br/>
<code> <pre>
struct Entry {
    char filename[12];
    uint16_t someNumber;
    uint16_t otherNumber;
};
</pre> </code> </p>
<p>I scrubbed the file, looking for some indication of how many entries there were in the index, and as far as I can tell there's nothing to explicitly tell the game that. While writing this post, however, I came to the realization that you can calculate the number of entries from the first entry in the index (more on that later). At the time, I just hardcoded the number of files in the short program I wrote to dump the contents (a nearly 20-year-old game isn't likely to change).</p>
<p>The next important bit was understanding what the two numbers after the filename meant. My initial guess was that they were the size and offset of the file in the archive. The first number looked plausibly enough like it could be a size, but the second number was confusing: it was really small (0 for the first couple of entries), only ever increased, and was the same for a bunch of consecutive entries. I added up the first number for all of the entries and ended up with something much smaller than the 2.5MB that the file is. I was wrong on both counts.</p>
<p>My next guess about the second number was that it was some sort of block number. One might think it was just the segment part of the 20-bit segment:offset addressing scheme. That's not right for a number of reasons: <ol> <li>20-bit addressing only handles one megabyte of memory.</li> <li>The data file is 2.5MB.</li> <li>Segments in 20-bit addressing are only 16 bits each; the potential offset values were much larger than that.</li> </ol> If the first number wasn't a size, perhaps it was an offset of some sort. The first index entry's offset would then be the length of the index. This turned out to be true, and this is how you can calculate the number of index entries (just divide that offset by 16, the size of an entry). So then, what was the second number? I tried to find the start points of the various files in order to get some landmarks that I could use to solve for whatever that second value was. As it turns out, the second number is the 64k block that the file starts in, and the offset is relative to the start of that block. The file's start position is then: <code>offset + (block * 64 * 1024)</code>.</p>
<p>The final file entry structure looks like this:
<code> <pre>
struct Entry {
    char filename[12];
    uint16_t offset;
    uint16_t block;
};
</pre> </code> </p>
<h2>Dumping the Contents of the DAT</h2>
<p>Now that I'd figured out the format, I needed to dump the files. The DAT is tightly packed, so you don't have to worry about alignment or anything like that. Dumping each file is basically just slicing out the bytes from its start position until the start position of the next file (or the end of the DAT if you're on the last entry). The code I wrote to do this is trivial; a rough version is sketched below.</p>
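<p>Here's a sketch in C of the whole extraction process, putting the index format and the offset math together (error handling omitted; this shows the shape, not production code):</p>
<code> <pre>
/* Sketch of a SC2000.DAT extractor based on the format described above. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    char     filename[12];  /* 8.3 name, not NUL-terminated */
    uint16_t offset;        /* offset from the start of the block */
    uint16_t block;         /* which 64k block the file starts in */
} Entry;                    /* 16 bytes, tightly packed */

static long startOf(const Entry *e) {
    return (long)e->offset + (long)e->block * 64L * 1024L;
}

int main(void) {
    FILE *dat = fopen("SC2000.DAT", "rb");
    fseek(dat, 0, SEEK_END);
    long datSize = ftell(dat);
    rewind(dat);

    /* The first entry's start position is the length of the index, and
       each entry is 16 bytes, which gives us the entry count. */
    Entry first;
    fread(&first, sizeof first, 1, dat);
    long count = startOf(&first) / 16;

    Entry *index = malloc(count * sizeof *index);
    index[0] = first;
    fread(index + 1, sizeof *index, count - 1, dat);

    /* Each file runs from its start position to the next file's start
       position (or to the end of the DAT for the last entry). */
    for (long i = 0; i < count; ++i) {
        long begin = startOf(&index[i]);
        long end   = (i + 1 < count) ? startOf(&index[i + 1]) : datSize;

        char name[13];
        memcpy(name, index[i].filename, 12);
        name[12] = '\0';

        unsigned char *buf = malloc(end - begin);
        fseek(dat, begin, SEEK_SET);
        fread(buf, 1, end - begin, dat);

        FILE *out = fopen(name, "wb");
        fwrite(buf, 1, end - begin, out);
        fclose(out);
        free(buf);
    }

    free(index);
    fclose(dat);
    return 0;
}
</pre> </code>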
<h2>What's Inside</h2>
<p>Part of my initial motivation was getting at the tasty music files inside the archive, so I was hoping they were in a sane, somewhat standard format and not something like an XM or MOD file that had been stripped and rewritten into some other custom binary format. As luck would have it, they're run-of-the-mill XMI files, which can be easily converted to MID.</p>
<p>The file formats inside of the DAT are (in no particular order): <ul> <li>PAL - palette</li> <li>RAW - bitmap</li> <li>FNT - font</li> <li>HED - tileset header</li> <li>DAT - tileset data</li> <li>XMI - music in <a href="http://www.vgmpf.com/Wiki/index.php?title=XMI">Extended MIDI</a> format</li> <li>VOC - sound effects in <a href="http://wiki.multimedia.cx/index.php?title=Creative_Voice">Creative Voice</a> format</li> <li>TXT - strings</li> <li>GM.(AD|OPL) - General MIDI sound fonts for AdLib and Yamaha OPL</li> </ul> In the interest of not running long, I'm not going to delve into the non-"standard" formats here. Maybe I'll dig in and document them and the SCURK formats at some later point.</p>
<h2>Conclusion</h2>
<p>I hope this was as interesting to read as it was for me to discover. My biggest unanswered question at this point is why the index doesn't use a 32-bit unsigned int for the offset from the start of the file. I've fumbled around the Watcom C/C++ docs, and I can't find anything to shed light on this (the game uses DOS/4GW, which was distributed with Watcom). The DOS/4G docs are behind a $49 paywall and I'm not <em>that</em> interested in finding out the answer.</p><br /><a href="http://www.brettlajzer.com/36">view</a> photoing in troy Thu, 17 Jul 2014 02:25:48 UTC http://www.brettlajzer.com/35
<p><img src="http://brettlajzer.com/photos/_data/i/galleries/7/img_0002-me.jpg" style="float: left; width: 200px; margin-right: 1em;"/>I went on a photo walk with some friends from work over the weekend and ended up with about 30 decent images. They're all up on <a href="/photos/index.php?/category/12">my photo gallery</a>.</p><br /><a href="http://www.brettlajzer.com/35">view</a> photos, updates, random stuff Sat, 07 Jun 2014 19:27:35 UTC http://www.brettlajzer.com/34
<p>I got a new camera and had started messing around with flickr again, but I realized after a few weeks of posting photos to it that I should probably find a suitable web gallery, install it here, and put my photos there. So, that's <a href="/photos/">exactly what I did</a>. Just to get things rolling, I uploaded my existing photos pretty much raw. I'm planning on spending some time organizing things into albums and renaming the photos. I'm also planning on re-theming the site. The software I settled on is called <a href="http://www.piwigo.org">piwigo</a> and it's pretty competent, supporting both themes and extensions (and multiple users, if you're into that).</p>
<p>I also decided that having a front page where I post random meta-news about the site was pretty silly, so I've pulled the blog to the front and retired the old front page. This means that the <a href="http://www.brettlajzer.com/rss.php">rss feed</a> now has a new location.</p>
<br /><a href="http://www.brettlajzer.com/34">view</a> fun with idTech4 Tue, 04 Dec 2012 04:27:29 UTC http://www.brettlajzer.com/33
<p>So I got bored tonight and started <a href="/images/idTech4/">screwing around with a feature of the idTech4 engine</a> that lets you oversample a scene while taking a screenshot. Normally this won't make any difference, since everything stays where it is. If you set the r_lightSourceRadius cvar to something other than 0, though, it will jitter the lights randomly every frame. This makes the game look completely insane, because the shadows randomly jump around, unless you're taking an oversampled screenshot. If you oversample many times (say 128 to 1024), you can get a really smooth result out of the shadows. In the end, it looks nothing like idTech4. I'm not sure that I like it more, but it's rather interesting.</p><br /><a href="http://www.brettlajzer.com/33">view</a> Optimizing my Haskell Raytracer Sat, 25 Feb 2012 23:35:55 UTC http://www.brettlajzer.com/32
<p>Way back in 2010 I wrote a really crappy, proof-of-concept raytracer as a way of familiarizing myself with Haskell. I picked it back up this morning because I wanted to see how much I could improve its performance with some pretty simple optimizations. I could probably improve performance further by making actual structural changes to the program, but I'd rather fall back on algorithmic improvements before fighting with the code generator. At any rate, the final results are satisfying: a 70% decrease in running time overall.</p>
<h2>Strictness Annotations</h2>
<p>The first optimization I did was to put <a href="http://www.haskell.org/haskellwiki/Performance/Strictness">strictness annotations</a> on the Doubles in the vector type. What started off as:
<code> <pre>
data Vec3f = Vec3f Double Double Double deriving(Eq,Show)
</pre> </code> became:
<code> <pre>
data Vec3f = Vec3f !Double !Double !Double deriving(Eq,Show)
</pre> </code> This resulted in a <em>19% decrease</em> in overall run time. Nothing drastic, but still a very significant difference.</p>
<h2>Unboxed Strict Fields</h2>
<p>The next optimization was to add the compiler flag "-funbox-strict-fields". This tells GHC to automatically add the UNPACK pragma to strict fields in data constructors. The end result is that the Vec3f constructor is no longer storing heap pointers to Doubles, but rather the Doubles themselves. Unboxing the strict fields brought the total run-time decrease to 35%.</p>
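<p>For reference, the flag is equivalent to writing the pragma out by hand on each strict field:</p>
<code> <pre>
-- What -funbox-strict-fields effectively does to the type: each strict
-- field gets an UNPACK pragma, so the Doubles are stored inline in the
-- constructor instead of behind heap pointers.
data Vec3f = Vec3f {-# UNPACK #-} !Double
                   {-# UNPACK #-} !Double
                   {-# UNPACK #-} !Double
  deriving (Eq, Show)
</pre> </code>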
<h2>Float versus Double</h2>
<p>As most people know, Doubles are not very fast on many systems. In the past, I had actually seen a speed increase by using Doubles instead of Floats in this raytracer; I believe it had something to do with GHC using SSE only for Double and not for Float. Regardless, switching from Double to Float doubled the total savings, for a <em>70% decrease</em> in run time versus the original.</p>
<h2>Code / Full Disclosure</h2>
<p>You can download the code here: <a href="/pub/raytracer.zip">raytracer.zip</a>. When run, it will generate a file called "output.ppm" containing the rendered image. It should look like this: <img style="float: right;" src="/images/raytracer_output.png" /> <br /> The above tests were done on my Acer Aspire One, which has an Intel Atom N270 1.6GHz HT and 1GB RAM. I'm not sure what the performance differences will be on a 64-bit machine with a better processor. <div style="clear: both; line-height: 0; height: 0;"></div> </p><br /><a href="http://www.brettlajzer.com/32">view</a> Brett Plays Old Games: <em>Deus Ex: Invisible War</em> Sun, 08 Aug 2010 03:57:52 UTC http://www.brettlajzer.com/31
<p><em>This is the first in a series of posts where I play and review old games that were possibly high-profile at the time they were released but have since faded. The majority of these will be PC games, because there are a lot of old PC games that most modern gamers haven't played, and I happen to love PC games.</em></p>
<p>Often thought of as the ugly stepchild of the original rather than a direct successor, Deus Ex: Invisible War builds upon Deus Ex in some ways while stepping backwards in others. Released in 2003 and developed by Ion Storm, the game was visually appropriate for its time. Running on a heavily modified version of Unreal Engine 2, it sports dynamic lighting, stencil shadows, normal mapping, and projected textures. It's reasonably stable, crashing only twice in the three or four hours that I played it. This review only covers those hours, not the whole game, but first impressions are everything, right?</p> <br /><a href="http://www.brettlajzer.com/31">view</a>