Wednesday, December 1, 2010

.Net 3.5 WCF Application + .Net 4.0 Conversion = Missing Config Sections?

As we've been converting .Net 3.5 projects to .Net 4.0, I had a couple of really mysterious issues where WCF Windows Services or command line (console) applications quit working on some servers, and only in some environments. Eventually they would get fixed, but I hadn't been able to pin down why they quit working in the first place, or what I did to fix them. 


Example: we use MSEntLib logging in this particular WCF Windows Service. The project was converted to .Net 4.0 and committed to source control; the continuous integration build server detected the changes, built the project and deployed it to the first test environment. Here's the error it would throw when the build server's deployment routine started the Windows service after deployment:


Service cannot be started. System.Configuration.ConfigurationErrorsException: The configuration section for Logging cannot be found in the configuration source.
   at Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.EnterpriseLibraryFactory.BuildUp[T](IReadWriteLocator locator, ILifetimeContainer lifetimeContainer, IConfigurationSource configurationSource)
   at Microsoft.Practices.EnterpriseLibrary.Logging.Logger.get_Writer()
   at Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write(Object message, ICollection`1 categories, Int32 priority, Int32 eventId, TraceEventType severity, String title, IDictionary`2 properties)
   at Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write(Object message)
   at <snip> - that's enough, you get the picture. 

I looked at the config (“{AppName}.config”), and the logging section was in there. Restart the service, same error. Look for XML errors in the config, find none, mess with it a little, start the service, same error. Rinse and repeat until you've torn it apart so much that when you put it back together it works, and you're like "Whuck?", and then you start drinking on the job :(

I finally figured it out. I'm not 100% sure this is accurate, but the best I can figure is that pre-.Net 4.0 - say 2.0 through 3.5 - on a WCF Windows Service at least, and probably for all .Net command line (console) applications, you could get away with the config file being named “{AppName}.config” even though the book says “{AppName}.exe.config”. When we did a straight conversion to .Net 4.0, the non-exe variant became illegal; you have to use “{AppName}.exe.config” or it won't go. 
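If you hit this after a conversion, the fix is anticlimactic: rename the deployed config file to the exe variant and start the service again. A minimal sketch - the service name and install path here are hypothetical:

cd /d "C:\Program Files\MyService"
ren MyService.config MyService.exe.config
net start MyService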

-Kelly Schoenhofen


Sunday, November 28, 2010

Anatomy Of A Continuous Integration System Part One

I occasionally give my eleven-year-old a particular piece of advice - "keep your eye on the prize" - and that's a good way to approach continuous integration. The architecture of a well-functioning msbuild script consists of milestones (significant events) tied together in a linear fashion with a clearly defined "prize" as the result. That's singular. If you find you need something more complex, you should be cutting your build up into smaller builds and setting up logical relationships between them.
As I said in part zero, I wanted to stay away from philosophy, but I want to talk about that last sentence a little more. Trying to address multiple needed outcomes/deliverables in a single container approaches a worst practice. You will need to check for and track multiple fatal failure conditions and, conversely, multiple success conditions, and you're going to fail trying. It's not worth it. You can't effectively serve two masters at the same time - listen to what Yoda said in the movies. A successful msbuild script has just one required output; everything else is frosting. Take your uber-complex build monster script, break down the requirements until you have a list of the prizes you need, and silo each one into its own build. Capisce?

Back to our build script. Remember wikipedia's definition of CI from part zero of this series?
"small pieces of effort, applied frequently" 
Identify your prize, break it down to the smallest piece of discrete effort, and write it so you can apply it frequently. That's the magic formula. In fact, 99% of the build scripts I write can be boiled down to the same skeleton over and over: cleanup, build (clean) and deploy (to Test). Step 1, Step 2 and Step 3. Everything else in your build is window dressing, and any failure of those three steps is fatal.
If other factors come into play, such as multiple environmental targets in a single build (deploy to Test, QA, Staging, offsite tape and burn to a DVD gold master), while I may pragmatically add an extra piece or two at first, it doesn't take very much to push me into cutting one build into multiple builds and chaining them together in some fashion.
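If I do chain them, the glue can be as simple as a thin orchestrator build that calls the siloed builds in order - a sketch only, with hypothetical file names:

<Target Name="Chain">
  <MSBuild Projects="build-and-deploy-test.build" Targets="Publish" />
  <MSBuild Projects="deploy-qa.build" Targets="Publish" />
</Target>

In practice I'd rather let the CI server do the chaining (one plan triggering the next) so each build stays independently runnable.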

Here's a molecular build script - under normal circumstances you don't want to break it down any further than this, and conversely you don't want to get too much more complicated than this. If you're deep in a thorny issue and you step back, take a look at your creation, and don't recognize a direct, familial relationship between your msbuild script and the Clean, Compile, Deploy pseudo-code skeleton below, then you probably made a left turn at Albuquerque along the way, and some refactoring needs to occur - perhaps along the lines of creating multiple builds out of your problem build.

 Starter stub skeleton of 99% of my build scripts:


<?xml version="1.0" encoding="utf-8" ?>
<Project DefaultTargets="Publish" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="MasterClean">
    <!-- cleanup actions go here, e.g. <RemoveDir Directories="$(OutputPath)" /> -->
  </Target>

  <Target Name="MasterBuild" DependsOnTargets="MasterClean">
    <!-- compile actions go here, e.g. <MSBuild Projects="MySolution.sln" Targets="Rebuild" /> -->
  </Target>

  <Target Name="Publish" DependsOnTargets="MasterBuild">
    <!-- publish/deploy actions go here, e.g. <Copy SourceFiles="@(DeployFiles)" DestinationFolder="$(DeployDir)" /> -->
  </Target>
</Project>

For troubleshooting purposes, I'll leave three manual cmd files committed into source control in the root of each project: one to kick off the cleanup, one to kick off the compile (which has a prereq of cleanup) and one to kick off the publish (which has a prereq of compile, which has a prereq of cleanup - you get the idea). Here's the meat of the Publish.cmd I use in a .Net 4.0 project, for instance -

%SYSTEMROOT%\Microsoft.NET\Framework\v4.0.30319\msbuild.exe master.build /t:Publish %*
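The other two cmd files are the same one-liner pointed at the earlier targets - a sketch, assuming the master.build skeleton above (Clean.cmd and Build.cmd respectively; DependsOnTargets handles the prereq ordering):

%SYSTEMROOT%\Microsoft.NET\Framework\v4.0.30319\msbuild.exe master.build /t:MasterClean %*
%SYSTEMROOT%\Microsoft.NET\Framework\v4.0.30319\msbuild.exe master.build /t:MasterBuild %*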

Very occasionally developers get some use out of them, but they are really for me, for when I'm initially setting up a build, troubleshooting build issues or build server/CI system issues, or doing maintenance on a build.

-Kelly Schoenhofen

Saturday, November 27, 2010

Anatomy Of A Continuous Integration System Part Zero

Wikipedia puts the definition of software engineering's Continuous Integration (CI) simply as "small pieces of effort, applied frequently", and that's pretty accurate. The various slices of what is packaged up these days as continuous integration have been around for decades, re-invented every few years under various names and methodologies, but it's been a semi-recent branding effort that put these particular best practices together under the common name of Continuous Integration. 
Rather than being a flash-in-the-pan methodology, CI has become the underlying platform - the bedrock - that all the trendy methodologies have been built on for the last 10 years, including Extreme Programming (XP), Agile, RUP, Scrum, etc. Unless your methodology is Cowboy Coding - and I mean the classic definition of cowboy coding: a lone developer or a handful of developers acting alone, practicing anarchy for argument's sake ("your rules stifle me!") - I bet your methodology has most or all of the continuous integration basics at the bottom of it.

The industry book definition of CI comes down to a handful of best practices - 
  • Using a source code repository system
  • Everyone gets latest (code) on a frequent basis
  • Frequent commits
  • Every source code commit generates a build*
  • Build automation
  • Build fast (and fail fast!)
  • Build results should be public
  • Automated unit tests in the build
  • Test environment should mimic Production environment
* To many people, this is all continuous integration is. It's important, and achieving that standard (every commit being built) means you practically have to be doing half of the other CI best practices. It's a good milestone to shoot for when you are converting to CI or building a CI system from scratch, but it's not the destination. Heck, I don't think achieving every facet and aspect of CI means you've reached your destination; CI is just a means - a tool - to make your software production more efficient and productive, with the end result of higher quality at every output. 

All of these best practices can be compared to and validated against the boiled-down idea of "a small iteration - a small complete effort - increases the quality of a larger effort". If one of these pieces of effort doesn't pay huge dividends by the time your project is hitting its stride, let alone its delivery date, then simply remove that practice from your playbook. CI shouldn't be stifling or handcuffing the developer - everything about it is about improving the quality of the software being engineered; this is not about a Configuration Manager or System Administrator power tripping. 

For the next few blog posts, I want to write up some basic articles on constructing & running a CI environment for a work group or an entire shop that develops in .Net but isn't using TFS* (Team Foundation Server), and I'll be mainly focusing on the art of the msbuild file and how your entire CI architecture can and should be seen in the way you craft your msbuild scripts. 

* Maybe you don't use TFS because you're the lone group in your company doing .Net and you can't get the budget for a TFS farm, or maybe you're required to use source control system XYZ for regulatory purposes and TFS isn't going to happen for purely technical reasons. 

None of the other posts are going to be as wordy or philosophical as this one - I promise they will be short and straight to the point. They are also all works in progress - they are the best practices I have today - and I not only can't guarantee they won't evolve a year from now, I certainly hope they will.

-Kelly Schoenhofen


Monday, July 12, 2010

Subversion Case Sensitivity & svnadmin pack

In my role administering Subversion, my most common request - usually from laptop users, which makes me wonder about a few things - is whether I would magically go through our main Svn repo and change the entire folder structure to either upper case or lower case, instead of the mixed case we've built up over time. 
We exclusively use Svn on Windows here, so while our desktops have no problems with mixed case, the Svn/Apache engine does. And since we're almost all pedigreed Windows-from-birth IT people here, we instinctively mix case to make naming conventions 'look good', not worrying about the guy (or girl) in the next cube figuring out our super-cool-but-obscure capitalization scheme on our folder structures, because 'hey, it's Windows, it doesn't really matter'. 
It does matter to Subversion/Apache though, and the error messages from TortoiseSVN when you try committing changes to a folder you checked out using the wrong case give you no indication that your commit hold-up stems from a single wrong-case character in the checked-out folder structure. 
So, trying to give someone some relief, I looked at the Svn roadmap for the next two years to see if they were going to fix the semi-fickleness of case in Svn/Apache (one way or the other, I don't care, but this in-between sucks), and unless it's embedded in the upgrade to HTTPv2 they are working on (1.7, this October), there is nothing in the works. In fact, I don't see anything exciting in the works, other than shelving and checkpointing tentatively planned in Svn 1.8 for the summer of 2011. 
I looked at the major accomplishments of the prior version, 1.6 (the version we're on), to compare what they have done against what is in 1.7 this fall and planned for 1.8 next year, and I saw a reference to the "pack" ability for large repos. While it's partially meant to save space, its main purpose is to eliminate the redundant tiny files that build up as revision after revision goes in and shard after shard is created to manage the repo. 
So, since we're 100% upgraded to 1.6.x, I packed Production this morning. Space-wise we only dropped ~135 MB from 4.06 GB, but file-wise we went from 148k files to 99k files. This should speed up disk I/O considerably, which is our current Svn speed bottleneck. 
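For the record, packing is a one-liner run on the server against the repository's path (the path below is a placeholder):

svnadmin pack D:\Repositories\Production

It only rewrites completed shards, so per the Svn docs it's designed to be safe to run against a live repository.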
Another engineer here may move the server to the SAN later this week (!!), which would be the next performance step. 



Thursday, May 13, 2010

com.atlassian.bamboo.repository.RepositoryException : Failed to checkout source code :(

While doing some stuck-build cleanup the other week, I did an across-the-board upgrade of our core build & source control servers and clients.
We were scattered between a few versions of Atlassian Bamboo, TortoiseSVN, the CollabNet Subversion Command Line Client (for Windows), VisualSVN Server and the VisualSVN Visual Studio client, though all were Svn 1.6.x compatible, and I thought it was a good time to bring us up to a common, consistent & recent release level since I was going to be in the guts of a few of our servers anyway.

With the Subversion project still working on the major 1.7 release, 1.6.x has pretty much ended at 1.6.11, and all our tools (for a change) have released 'final' versions for 1.6.11. I thought it was a good stopping point (since I was doing maintenance anyway) before 1.7 comes out - which we'll need to let mature for a few break-fix cycles before we move to it anyway - so this would catch us up enough to tide us over for ~six months at most. (I had the same thought six months ago, thinking there was no way the Subversion project would take six more months to get 1.7 out.)
I also threw in a point-release upgrade of Atlassian Bamboo to catch us up on the last 3 months of minor break fixes - an upgrade which wasn't technically tied to Svn 1.6.11.

So I grabbed new versions of all the tools we use, all compatible with Subversion 1.6.11 -

  • TortoiseSVN 1.6.8, x86 & 64-bit Windows clients (for the XP & Windows 7 PCs, respectively)
  • VisualSVN 2.0.1 client for Visual Studio 2008
  • CollabNet Subversion Windows Client 1.6.11
  • VisualSVN Server 2.1.2 (for Svn 1.6.11)
  • Atlassian Bamboo 2.5.5


I upgraded everything server-side, sent out emails to all the content creators and developers to upgrade their client tools, caught all the server boxes up on Windows Updates, and everything got a fresh reboot across the board. I disabled Basic Authentication on the Svn server and enabled NTLM Authentication only. Amazingly, it looked like everything still worked, and best of all, nobody was getting authentication popups to retype their credentials in TortoiseSVN anymore ;)

Fast-forward a week, to this morning: our secondary build server, while still showing totally green, is no longer building anything. I only noticed when a developer brought it to my attention that one of the products that is set to build only on that server hadn't built in the 48 hours since his commit :(
I pick at it and figure out it's failing to authenticate with the Svn server when I finally find this error:

Mainline - Product : Error occurred while executing the build for MAINLINE-PRODUCT-184
(com.atlassian.bamboo.repository.RepositoryException : Failed to checkout source code to revision '47527' for https://visualsvn/svn/Company/Mainline/Product)

I've seen similar errors in the past and it's normally a quick fix - somewhere in the maze of credentials that keep us compliant with the auditors, something expired or was cleared. Normally I just log on to the server in question, manually connect to the Svn repo using TortoiseSVN or CollabNet's client, re-type the credentials or re-accept an SSL cert that somehow got cleared, and done.
This time, everything was already working fine when I manually connected. I cleared all the cached credentials anyway, re-pulled down a small project to get the ball rolling, re-typed the credentials by hand, re-accepted the SSL certs, and re-kicked off a small build in Bamboo that only runs on the secondary server, to no avail. When I do it manually, everything works. When Bamboo does it, nothing works; it fails somewhere around the svn auth.

I decide that the Bamboo remote agent must need upgrading (even though a dusty note on Atlassian's site claims agents upgrade themselves, the local .jar & .exe files for the agent are still dated from 9 months ago, from the original install).
I try to upgrade the agent in place, give up on that when both (?) agents start fighting (!), remove the Bamboo agent completely from the secondary build server, remove it completely at the Bamboo administrative level, and reinstall from scratch on the secondary server. I have to re-set up the builders (msbuild & script) and go into each of the build projects that solely use the secondary server and re-select the build agent.

I tried running one of the failing builds (still green!) - still no joy, same errors as before around svn auth. I verified the remote agent was upgraded to Bamboo 2.5.5, reinstalled it as a Windows service, played with the service account for a bit; still not working.

I was able to pull a full stack trace out of the Bamboo agent logs and started googling bits of it. I gleaned enough to narrow the search down to the Atlassian wiki/forum, searching on bits and pieces of the error message.

Finally, I found this:
Authentication Failure With NTLM Subversion Authentication
http://confluence.atlassian.com/display/BAMKB/Authentication+Failure+With+NTLM+Subversion+Authentication

Grr, grr, grrr. It turns out the Java SVNKit library that Bamboo uses is flaky as heck doing NTLM authentication. And if you remember, I had turned off Basic Authentication and turned on NTLM Authentication 7 days prior, during the upgrade of VisualSVN Server :(
My real irritation is it's only partially failing - our primary build server is fine apparently doing NTLM - it's just our secondary build server that is flaking out.

My steps to rectify this closely follow the article:

  1. Set Basic Authentication to be the primary method of authentication in the Bamboo remote agent by adding this to the agent's config file and restarting the service:
    • wrapper.java.additional.3=-Dsvnkit.http.methods=Basic,Digest,Negotiate,NTLM
  2. Turn on Basic Authentication again in the VisualSVN Server setup and recycle the VisualSVN services.
  3. Clear cached Svn credentials on the secondary build server, and manually do a check-out and commit (a whitespace change) of a simple project using the CollabNet command line client, specifying the username and password of the service account the Bamboo remote agent runs as, to establish Basic Authentication credentials instead of an NTLM token (see the sketch after this list). 
  4. Manually kick off the builds tied to the secondary build server in the Bamboo console. 
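Step 3 in command form - a sketch only, with the repo URL and service account name as placeholders:

svn checkout https://visualsvn/svn/Company/Tools/TinyProject C:\Temp\TinyProject --username svc-bamboo
cd /d C:\Temp\TinyProject
rem make a whitespace-only edit to any versioned file, then:
svn commit -m "whitespace change to cache Basic Auth credentials" --username svc-bamboo

Run it under the Bamboo agent's service account so the cached Basic Auth credentials land in that account's profile.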

Fixed...

Initial upgrade cost: 2 hours.
Later troubleshooting: 4+ hours on my part, plus developer time wasted waiting for their builds.

Sigh :(

Thursday, May 6, 2010

Setting A Secure Host Header on IIS6/Server 2003

This is easily findable on the Google, but I thought I should include it here in case the prior post drew you in via a search engine. If you want to accomplish the same thing in IIS6, do the following (your AdminScripts folder mileage may vary; I'll use the default on a Windows 2003 server here):


High-level example:
C:\Inetpub\AdminScripts>cscript.exe adsutil.vbs set /w3svc/<siteID>/SecureBindings "IP.Of.The.Site:443:Web.Site.Com"

Low-level example:
C:\Inetpub\AdminScripts>cscript.exe adsutil.vbs set /w3svc/123456789/SecureBindings "127.1.2.3:443:WebServices.SecureCompany.com"
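If you don't know your site ID, enumerating w3svc lists the numbered site keys - a hedged aside; if memory serves, the /p switch limits output to paths only:

C:\Inetpub\AdminScripts>cscript.exe adsutil.vbs enum /w3svc /p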

-Kelly

Thursday, April 29, 2010

How to set a host header on an SSL binding in IIS7



If you google for this you'll see a lot of answers, but none of them seemed production-ready or correct.
Situation: 

  • You have a Windows 2008 web server that is hosting multiple sites on different IP addresses. 
  • You want/need to return the site name rather than the box name in a WSDL call to one of your web services.
  • You may have multiple SSL certs installed on your server, for each site (i.e. VeriSign certs bound to the domain name of the site). 



Every example I looked at was for some other situation - typically a developer's PC using self-generated SSL certs, with "*" as the IP address of the site - but we don't roll like that.
Also, they usually added a binding, which works if you don't care which SSL cert you're using, but again, we don't roll like that. We want to edit an existing https binding so we know it's using the right SSL cert, not add a new binding. 



  1. Set up a plain-jane https binding on your site with the right SSL cert. The host header field is probably grayed out; don't worry about it.
  2. Navigate to the C:\Windows\System32\inetsrv folder in a command window. 
  3. Run this command line - replace everything in #'s with your value:
appcmd set site /site.name:#SiteName# /bindings.[protocol='https',bindingInformation='#IPAddress#:443:'].bindingInformation:#IPAddress#:443:#HostHeader.YouWant.com#


example:
appcmd set site /site.name:Intranet /bindings.[protocol='https',bindingInformation='10.12.1.10:443:'].bindingInformation:10.12.1.10:443:Intranet.MyCompany.com
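To sanity-check the edit, listing the site should now show the host header on the https binding (site name taken from the example above):

C:\Windows\System32\inetsrv>appcmd list site "Intranet"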


You may need to recycle IIS for the binding to show up or work correctly, but calling your service via https should now return the host header rather than the server name!

Thursday, March 25, 2010

Computer Forensics: The Impact of Electronic Evidence on Modern Corporate Investigations

I attended a 2-hour lunch & learn hosted by The Business Bank & RJ Ahmann Company at the Golden Valley Country Club, titled “Computer Forensics: The Impact of Electronic Evidence on Modern Corporate Investigations”. We ate while 3 speakers talked. Mark Lanterman was the last and primary speaker - I didn't catch the first two speakers' names, but they did excellent jobs. Mark is the CTO of a company called "Computer Forensic Services", a pretty high-end firm brought in by plaintiffs, defendants and the court system in general to figure out electronic evidence. His most recent high-profile cases have been Denny Hecker, Tom Petters and Paul McCartney's divorce. 

The initial speaker worked for The Business Bank and talked about banking/e-commerce fraud and what the e-commerce field is doing about it. 

Since you all love write-ups, here are some cleaned-up notes and some observations of mine on the talks - some tentative best practices being put into place by the industry. Since we run an e-commerce platform ourselves, it behooves us to take a look at what the trends are, where our gaps are and maybe where we want to go ourselves. 

He primarily spoke about people who have admin accounts with their company's financial institution, or people with the role of orderer on an e-commerce platform.
 

1)    When a client’s admin account creates additional accounts, the new account is held pending until a separate confirmation is made. They said it’s the hottest source of fraud right now – an admin’s account gets hacked, and the only thing the hackers do is create additional orderer accounts and do their fraud from the fake orderer accounts. If it’s done properly, you never catch on that an admin’s account has been compromised (or which one), so they can keep doing it and doing it and doing it.
2)    As a result, have admin accounts' password expiration be much faster than a normal user's. They even referenced daily/every-login expiration they have implemented. If the password changes every time the account is used, it makes the account harder to keep compromised and/or resell on the black market. It’s better to have less complex passwords (to keep users OK with 1-day expirations) than to have super-complex passwords that never expire.
3)    When activities of an orderer are outside the norm, insert challenge questions into the web page they have to answer before they can continue. If they can’t answer them, their account is locked until they call in.
4)    Geotracking (IP) restrictions on orderer/admin logins.
5)    Orderers/admins select a background watermark picture when their account is first created/logged into. This background watermark ensures they are in the “real” website when ordering/doing admin tasks.
6)    Issue RSA token fobs for admin & orderer accounts. It’s cheaper than a single fraud investigation. Usually you can charge the fob to the client and they are happy paying for it.

Corporate-wise:
1)    This may sound draconian, but no file attachments in (externally bound) emails. None. Email is not secure. Secure email is still not secure. No file attachments, no documents, period.
2)    If you need to share files, use an external service. They threw out “ShareDefender” as an example, for (casual) secure file transmission between you & clients.

Mark Lanterman gave the longest talk, and he mainly used war stories to illustrate where computer forensics has gone, and the role corporations play there.

1)    Deleting evidence highlights what needs to be looked at by investigators. You can’t find a needle in a haystack but you can often see where the needle was by the missing hole, and then you know where to start looking.
2)    In the last couple years, for the first time ever in Minnesota, plaintiffs have been sanctioned for “evidence spoliation”.
3)    Evidence collected for one court case has led to many other cases. I.e. two executives sue each other and the 3rd party court appointed forensic investigators uncover internal fraud from IT purchasing, etc.
4)    Give out rich/smart devices to employees; they are much easier to monitor and collect much richer evidence. They named the iPhone as top of the list - the greatest unintentional evidence collector on employees.
5)    Even if the corporation didn’t do the theft, if an employee brings IN illegally obtained data/software and uses it, or forwards it on for other departments to use, the corporation can and will be held liable in a court of law. This happened in the Pioneer Press vs. Star Tribune court case a couple years ago. You cannot let new employees bring in shady data, shady software or shady devices; you will be held liable for their use. The Star Tribune was fined & sanctioned for this, and they also had to pay the Pioneer Press’s complete court costs and all expert testimony fees.

That last point is highly relevant to your employee onboarding process. There is absolutely no allowance for bringing in home laptops, home software, USB drives from home - nothing. You have to actively prevent it, not just say “hey guys, don’t bring in your favorite software from home, or the client list from your last place of work”. Employees doing that are literally exposing your entire company to successful litigation & sanctions.

-KellyS

Friday, February 5, 2010

Web.config inheritance madness!

We’re slowly migrating to Server 2008/IIS 7, and we hit a curious issue the other day. We have a marketing "subsite” that’s incredibly basic – think a handful of .html and .asp pages – in the same domain space as our flagship asp.net application, just taking up an isolated subfolder. (I actually use a virtual directory, because there’s no good reason to physically host one project inside another project, let alone an html/asp project inside a .net 3.5 project where you have two totally disparate groups making changes.) Other than in Production, nobody makes a habit of hitting the marketing site. So when a break-fix for one of the few pieces of functionality it has showed up, Development wasn’t able to pull up the marketing subsite in Test. They got this instead:


Server Error in '/Child' Application.




Could not load file or assembly 'Super.Awesome.v3.3, Version=3.3.2.0, Culture=neutral, PublicKeyToken=abc123youandme' or one of its dependencies. The system cannot find the file specified.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.IO.FileNotFoundException: Could not load file or assembly 'Super.Awesome.v3.3, Version=3.3.2.0, Culture=neutral, PublicKeyToken=abc123youandme' or one of its dependencies. The system cannot find the file specified.


The assembly that whacked us is an http module used by the parent asp.net website, referenced in the parent folder’s web.config in both httpModules and system.webServer/modules. IIS 7 has some great inheritance features, but they never seem to work when you want them to (MSEntLib database connection strings) and always work when you don’t want them to (referenced modules).

Googling around, there are a few solutions, but the one that seems to work best is the one I didn’t like at all, initially. They want you to modify the web.config of the parent application to stop inheritance; I’d rather modify the child application to block inheritance. MS’s method seems backwards to me - more of a security risk and a hindrance than a help. If you’re on a locked-down server farm (say, rented web space at a 3rd party co-location facility), anyone with physical rights above your folder can make you inherit whatever you don’t explicitly call out. They could write an http, network or file stream logger that steals logins and passwords, for instance. Securing your site via https wouldn’t help, because they would be on the inside of the protection. And you wouldn’t be able to do anything about it; you wouldn’t even know what you were inheriting.

Alright, soap box time over. Here’s the simplest way to block web.config section inheritance downstream - insert

<location path="." inheritInChildApplications="false">

and

</location>

around the section you want to stop child inheritance for.

I used system.web & system.webServer in this example, as that’s where extension mapping modules seem to be loaded most commonly.

<?xml version="1.0"?>
[…]
  <configuration>

    <location path="." inheritInChildApplications="false">
      <system.web>
      […]
        <httpModules>
          <add type="SuperAwesome.HttpHandlerModule, Super.Awesome.v3.3, Version=3.3.2.0, Culture=neutral, PublicKeyToken=abc123youandme" name="SuperAwesomeHttpHandlerModule" />
        </httpModules>
      </system.web>
    </location>

  […]
    <location path="." inheritInChildApplications="false">
      <system.webServer>
        <modules>
          <add type="SuperAwesome.ASPxHttpHandlerModule, Super.Awesome.v3.3, Version=3.3.2.0, Culture=neutral, PublicKeyToken=abc123youandme" name="SuperAwesomeHttpHandlerModule" />
        </modules>
      </system.webServer>

    </location>
  </configuration>

Not the way I would have handled this (to repeat, I would have had the child web.config be able to block inheritance), but it could be worse.
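For what it's worth, you can at least knock out individual inherited modules from the child side, even if you can't block inheritance wholesale from there. A sketch, reusing the module name from the example above - the child application's web.config would carry:

<configuration>
  <system.web>
    <httpModules>
      <!-- remove the module inherited from the parent web.config, by name -->
      <remove name="SuperAwesomeHttpHandlerModule" />
    </httpModules>
  </system.web>
  <system.webServer>
    <modules>
      <remove name="SuperAwesomeHttpHandlerModule" />
    </modules>
  </system.webServer>
</configuration>

That only helps when you know (and can enumerate) exactly what you're inheriting, which was my whole complaint.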