Ramblings

iOS Compiler (Bug?): "invalid offset, value too big"

posted Mar 31, 2011, 9:28 AM by Phill Djonov   [ updated Mar 31, 2011, 2:10 PM ]

This is a bug that's bitten me a few times already. Basically, the iOS compiler fails to generate code for some function somewhere in the file, causing it to bail with the error "invalid offset, value too big". The problem is that it doesn't tell you which function it failed on (the numbers surrounding the error seem largely meaningless).

Fixing it is simple, if tedious. Go through each function definition (including C++ methods and Obj-C message handlers), starting with the largest, and comment out the body. If commenting the body out makes the file compile, you've found your culprit. All that's left is to break the function out into several parts and call them all in sequence from the original function. Note that while this bug usually comes up around complex flow-control (giant switch-case statements, hugely nested ifs, etc), that isn't always the case (though perhaps the compiler is failing on some inlined-in flow-control).

I don't know whether this is a bug in the compiler's code generator (though that seems likely). It may just be a limitation of the target platform, in which case the real bug is how useless the error message is.

Rhythm Spirit

posted Jul 9, 2010, 11:59 AM by Phill Djonov

Holy crap, it's finally shipped. Now I may die peacefully...

Pretty damn proud of how it turned out in the end. You can check it out here or on iTunes. Definitely recommend it if you like rhythm games.

WPF: Subtle Binding Crash

posted Jun 8, 2010, 11:22 PM by Phill Djonov

Got a crash in, of all things, System.Windows.Controls.Primitives.Popup.OnWindowResize(Object sender, AutoResizedEventArgs e). A NullReferenceException, to be precise. Ages later, it turned out that this error was actually caused by a binding operation on one of the controls in the popup failing because it was trying to instantiate itself as TwoWay against a read-only source property. Somehow, the binding error managed to turn into a NullReferenceException in the layout pass...
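For the curious, here's a minimal sketch of the kind of binding that can trigger this. The control and property names are made up; the key detail is that TextBox.Text defaults to TwoWay, which fails against a read-only source property, while an explicit Mode avoids the failed binding (and, apparently, the crash downstream):

<!-- Hypothetical binding inside the popup. StatusMessage is a
     get-only property on the view model, so the default TwoWay
     binding fails; forcing OneWay sidesteps the problem. -->
<TextBox Text="{Binding StatusMessage, Mode=OneWay}" />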

Just throwing that out there for anyone that might be seeing something similarly awful.

Texture vs. Polygon: Round 1

posted Oct 13, 2009, 3:46 PM by Phill Djonov

I recently critiqued some work-in-progress game art for an (I guess) art student, and it struck me that it can be fairly hard for a newcomer to judge where to put their details. Some details don't work very well as geometry. Others don't work well in a texture. Knowing where to put a given mark or line is the difference between effectively using your polygon budget to produce mind-blowingly good scenes and ending up with a mess of blurry textures and too-perfect railings.

When working from references, there's one simple rule that helps make a lot of it clear: check how details respond to distance and viewing angle.

Get a close-up shot facing your object dead-on. Get another one from a great distance. Get another one from a middle distance, but from a glancing view angle. Now look, not at the object itself, but at the lines that compose it. Any line that stays both visible and reasonably crisp in all three images must be modeled into the geometry as a hard polygon edge. Any line that vanishes in the distance or at a glancing angle belongs in a texture. Lines that remain visible but become indistinct far away can go in either category - generally keep them geometry if they have to look exceptionally crisp from very close, else paint them into your texture.

Also pay attention to how your object contributes to the overall scene, and how it reacts to the intended lighting. A typical old Parisian building has fairly subtle horizontal detailing between the floors (excepting the ones with the ridiculously bold lines), and you might be tempted to throw it all into a bump map. That's not right though, because the horizontal lines aren't there for the building - they're there for the street. Take a look down a Paris street and it suddenly becomes apparent that the subtle little borders are there to pick up the light, cast small shadows, and create a series of horizontal lines that continue from building to building for as far as you can see. If you paint them into textures the detail will vanish on you much closer than it would in the real world, and your scene will look odd.

One thing to be careful of: make sure the images you base your decisions on are taken from positions the player is actually going to be looking from (and I mean really looking; glancing briefly as you run past chasing an enemy doesn't count). If the player is stuck on the ground, then anything above the third story of a building is effectively a background object (unless your design calls for long, careful upward glances). Don't waste details on it - save that for the final detail pass and add another tree, lamp post, trash can, piece of litter, or sign instead. You'll do less work, your frame rate will be higher, and your player will feel as though they're in a much cooler and more immersive environment.

How to Make boost::function-like Templates

posted Oct 6, 2009, 3:58 PM by Phill Djonov   [ updated Oct 19, 2011, 2:50 AM ]

If you've dealt with Boost at all, you've certainly seen this at some point:

boost::function< int( int, float, const char* ) > func = ... ;
int x = func( 4, 3.0F, "" );

So how is that done? How does the template "know" what the parameters are when instantiating the operator() method? You might have thought figuring this out would be a simple matter of poking around the boost header files, but you'd be wrong. They are, unfortunately, an incomprehensible mess of preprocessor madness. The reason for this will soon become apparent.

The thing to remember is that there is no magic at all to the above. Once you take the statement apart, it becomes very easy to see what's going on. Starting inside the angle brackets:

int( int, float, const char* )

It doesn't look like one at first glance, but that's a type. To be precise, it is an unnamed function prototype with unnamed arguments, which returns an int. The original code could just as easily have been written:

typedef int madfunc( int x, float y, const char* );
boost::function< madfunc > func = ... ;

Note that madfunc is a function type, not a pointer-to-function type. You can't really do anything with the type directly (my compiler lets me declare a variable of that type, but I can't actually initialize it to anything - which makes sense), but it is something that can validly exist in C++ (and even in C). And since it is a type, you can use it in a template wherever a type is expected. And since you can do that, you can also use evil template specialization tricks like this one:

template< typename F >
class myfunc;

template< typename R >
class myfunc< R() >
{
    //...
};

template< typename R, typename P0 >
class myfunc< R( P0 ) >
{
    //...
};

template< typename R, typename P0, typename P1 >
class myfunc< R( P0, P1 ) >
{
    //...
};

//and so on

Yes. That actually works. A (reasonably modern) C++ compiler will match the function type you pass in to the correct specialization, deduce all the parameter types, etc, etc. (Kinda makes you feel sorry for the guy who has to write the compiler, no?)

Of course, you need a full specialization for each and every allowable arity (number of function parameters), so writing this out by hand would get tedious rather quickly. That's why the Boost headers are so incomprehensible. It really isn't a clever ploy to keep you from figuring out the magic; it's there to save them work. They have one header defining the entire family of boost::function specializations, which they include once per supported arity with a special macro set to the argument count each time, and the preprocessor diligently expands that out into the specialization for zero arguments, for one, for two, etc.

MSBuild and Visual Studio's Hosting of It

posted Aug 13, 2009, 7:07 PM by Phill Djonov   [ updated Sep 10, 2009, 12:45 AM ]

This is an interesting issue that might have bitten you if you've been trying to get code generators to play nicely in C# projects.

You get your generator. You get your project. You open up the project and add an item group:

<ItemGroup>
    <GenerateParser Include="MyGrammar.y" />
    <Compile Include="MyGrammar.parser.cs">
        <DependentUpon>MyGrammar.y</DependentUpon>
    </Compile>
</ItemGroup>

And then you make a target:

<Target Name="GenerateParsers" Inputs="@(GenerateParser)" Outputs="@(GenerateParser->'%(Filename).parser.cs')">
    <Exec Command="gppg.exe @(GenerateParser) &gt; %(Filename).parser.cs" />
    <Touch Files="%(GenerateParser.Filename).parser.cs" /> <!-- probably not necessary... -->
</Target>

And you hook it up:

<PropertyGroup>
    <BuildDependsOn>GenerateParsers;$(BuildDependsOn)</BuildDependsOn>
</PropertyGroup>

You load that project up and so far it looks perfect. You've got your grammar file in the solution tree, with the generated code file neatly tucked away beneath it (like already happens with the .Designer.cs file for WinForms components). You hit build, and your generator only runs when its source is out of date. Perfect, right?

Wrong!

It's wrong because the C# compiler doesn't seem to pick up the freshly generated file unless you build and then rebuild the project. It gets even more confusing when you crank up the MSBuild verbosity and see that the compiler is being executed exactly when it should be. It's almost as if it were reading the old code file...

Actually, it is. This is a consequence of the fact that Visual Studio isn't using the MSBuild you read about in the documentation. It's using a special hosted version, which ties into a file-caching mechanism to make project builds go faster. That caching mechanism, however, breaks down when it comes to detecting situations like the one above. The easiest workaround I've found is to add this line to a PropertyGroup somewhere:

<UseHostCompilerIfAvailable>False</UseHostCompilerIfAvailable>

Your build gets a tad slower, but hey - at least it now works properly.

Incidentally: GPPG is an awesome tool.

cgConnectParameter Considered Harmful

posted Aug 3, 2009, 6:49 AM by Phill Djonov   [ updated Aug 3, 2009, 7:06 AM ]

Hooking up a large number of shared effect parameters to a single head parameter via cgConnectParameter is, apparently, not the intended use case. Doing so causes any cgSetParameter call on the head parameter to become orders of magnitude slower (with the slowdown proportional to the number of connected effect parameters). This is the case even when deferred parameter setting is used, which is fairly surprising, given that deferred setting should mean parameter values don't force any sort of evaluation until something actually goes to read from them.

Note that much of the wasted time may not actually be the fault of cgConnectParameter itself. I have a strong suspicion that the blame lies with the implementation of effect-level parameters in general.¹ Connecting large groups of program parameters may be A-OK, though I've yet to test that theory.

This applies to the April 2009 release of the Cg 2.2 runtime, running on x86 Windows via OpenGL.

1. Perhaps the Cg runtime forces cgSetParameter* to loop through connected effect parameters in case one of them is involved in a state-assignment expression? If that's the case, it strikes me as amazingly silly for the effect runtime not to follow lazy-evaluation rules.
