
# TDD is not about testing

Saw this comment on a StackExchange post:

“Testing can never prove the absence of bugs.  Just because your toasts [sic] pass does not mean you’re done.  Your tests can at best show that a very very very small subsample of statistical insignificance of the possible inputs to your program produce the outputs you think they should.  That’s not even close to proving the program is correct.  In any case, writing the tests does not help you architect [sic] solution that is extensible and maintainable going forward” – Old Pro

Wrong, wrong, wrong.

I repeat: TDD is not about testing.

A nice offshoot of TDD is that you have a growing body of regression tests that you can run after the fact; however -

The primary benefit of TDD, in my opinion, is exactly what the commenter above thinks DOESN’T happen: it helps you architect a solution that is extensible and maintainable.  I would say “design a solution”, because I think the lever of TDD meets the problem at the level of design, rather than architecture, and perhaps that’s where he and I disagree.  However, I think some practical tools in the TDD arsenal – I’m looking at you, spikes – also help at the traditional architecture level.

Why does TDD make better designs?

It forces you to know what success and failure look like for a given algorithm/problem/module/feature.

It encourages you to limit dependencies.

Similarly, it encourages you to use (and be aware of) collaboration patterns.

It discourages side effects.

It encourages you to craft your software in many ways that we all know are intuitively correct – short methods, loose coupling, single-purpose, small signatures, etc.

It encourages you to think like a client – a user of your software (not necessarily a human user, but they’re on the spectrum).

It encourages you to work in small, achievable chunks – no “big bang” integrations.

It doesn’t require you to be locked in a quiet room for hours while you dream up the ideal solution (which will probably be flawed, in many ways that you can’t predict).
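To make the test-first rhythm concrete, here's a minimal sketch (my own illustration, in Python rather than C#; the `slugify` helper is a hypothetical example, not from this post). The point is that the test comes first and pins down what success looks like before any implementation exists:

```python
import re

# Step 1: write the test first -- it forces you to decide what success
# looks like, and nudges you toward a small, dependency-free signature.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"

# Step 2: write just enough code to make the test pass.
def slugify(title):
    """Lowercase, strip punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()
```

Note what the test bought us before a line of implementation existed: a short method, a single purpose, no side effects, and a signature a client could actually call.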

# iTunes Is A Piece Of Shit

I have a problem.  A #FirstWorldProblem.  I can no longer sync my iPhone with my laptop via iTunes.  And I blame Apple.

I have a 16GB iPhone 4.  It’s upgraded to the latest version of iOS, and shows 1.8 GB free space.

In iTunes, on my laptop, my music library takes up 6.81 GB of space.

iTunes reports that my phone has about 6.5 GB of audio on it:

This makes sense: most of my songs are the same on my phone as are on my laptop.  However, I’ve bought several albums recently, which makes my laptop library size larger.

When I try to sync:

KABLOWIE!

Are you kidding me?  I have 1.85 GB of free space on the phone, approximately the same songs on my phone as are on my laptop, and yet it takes over 6 GIGABYTES of free space to do a sync?

iTunes has always been a piece of shit – a speck in Apple’s eye, if you will.  If I weren’t so intimately familiar with how software can be (note I didn’t say should be) developed, I’d be more shocked that my situation is still happening.  But it’s apparent that nobody on the iTunes team ever asked:

If a user wants to sync, and 95% of the songs and apps are the same, can they get away with having free space that represents about a quarter of their library size?

Duh.
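The back-of-the-envelope arithmetic, using the numbers above, shows just how little space a delta sync should actually need:

```python
# Back-of-the-envelope sync math using the numbers from this post.
library_gb = 6.81   # music library on the laptop
on_phone_gb = 6.5   # audio already on the phone
free_gb = 1.85      # free space reported on the phone

# A delta sync only needs room for what's new, not the whole library.
delta_gb = library_gb - on_phone_gb
print(f"new data to copy: ~{delta_gb:.2f} GB")

# The new albums fit in the free space several times over.
assert delta_gb < free_gb
```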

# PowerShell Output When Called From a Batch File

Got this error in a PowerShell script today.  It was being called from a Windows batch file.

out-lineoutput : The OS handle’s position is not what FileStream expected. Do not use a handle simultaneously in one FileStream and in Win32 code or another FileStream. This may cause data loss.

There’s a post that explains what’s going on (from 2008!) and let me tell you, my friends, the cause (and workaround) is gnarly.

This is me, pissed off that I can’t do something relatively simple from a user’s perspective (send all output to the same file).

Developers:

Think like a user.

Think like a user.

Think like a user.

I don’t give half a flying shit in a Wichita windstorm what sort of cool tech you have going on under the covers if you can’t solve my basic needs, such as, oh, I don’t know, run a scheduled task and get a log file created that includes both standard output and standard error.  In fact, I don’t give half a shit about STDIO at all, or the subtle differences between the various streams and the historical legacies of same.

Grr.
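For contrast, here's how trivial "all output to one file" ought to be. A sketch in Python (my illustration; nothing to do with the OS-handle bug above): merge a child process's stderr into its stdout so one stream captures everything, in order.

```python
import subprocess
import sys

# Run a child process that writes to BOTH stdout and stderr; the child
# command here is just an inline Python one-liner for demonstration.
child = [sys.executable, "-c",
         "import sys; print('normal output'); print('error output', file=sys.stderr)"]

# stderr=subprocess.STDOUT folds the error stream into stdout, so a
# single capture (or a single log file) gets both.
result = subprocess.run(child, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT, text=True)

print(result.stdout)
```

One parameter, both streams, one log. That is the entirety of what I, the user, wanted.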

# Scripting Forefront TMG Load Balancer Draining

So this is kind of cool.  Part of my recent infrastructure work involved setting up a remote deployment system, which seemed to work pretty well.  There was a gotcha, though: the “money shot” in the web site portion was a simple cp (Copy-Item) from a staging folder to the “real” folder, which meant that, for the couple of seconds that the copy was happening and the worker process was restarting, users could get weird errors, including 404s and 500s.  Not that probable, perhaps, but not good.

The next step was to take a look at our Microsoft Forefront TMG (Threat Management Gateway) servers.  We have two of them, a primary and secondary, to which an NLB (Network Load Balancer) directs traffic based on remote IP address.  Each TMG server, in turn, can direct incoming HTTP requests to one of multiple web servers in a web farm.

TMG has the concept of draining connections.  Draining, in this context, essentially means that you want to prevent new connections from hitting a particular server in a web farm.  This is exactly what we want, but in an automated fashion.

The idea this week was to try to script TMG to drain all connections to Box A before we actually deploy; then, after we confirm that we have a good deploy, do a “resume” of Box A – essentially telling TMG that the server is good to put back in to the rotation.

So, first of all, TMG is only scriptable via a COM interface.  Ugh.  I’d hazard that this is due to the ISA Server legacy behind TMG; but the fact that, in 2012, a Microsoft product is not directly scriptable in PowerShell via cmdlets and managed code is sort of a pain in my rear.

Whatever. I can get around that, using:

PS> $obj = New-Object -ComObject <whatever>

In this case, the <whatever> is FPC.Root, which is the root node in the TMG object hierarchy.  From there you can drill down through .Arrays to .RuleElements to .ServerFarms to .PublishedServers and set the .DrainStopped property on the server in question, followed by .Save().  Interestingly, you can’t get where you want to go via the Policies route, which, if you look at the TMG Administration GUI, you would naturally think is the case.

An even more interesting tidbit: when running PowerShell scripts interactively on the TMG servers, using RDP, the hierarchy reports slightly different information than when I use PS remoting to run a remote script on the TMG server.  I noticed this because I started out using our secondary TMG server as the remote script host, but it was reporting back errors like this:

0xC00403A1 The property or method Save is not supported when the configuration is stored only on the local computer.

When I ran the same code interactively on our secondary TMG server, it worked.  And, when I changed the remote script host target from our secondary to our primary TMG server, that error went away.  Weird.  It’s like the object instantiation is context-sensitive and knows that if you are running on the primary TMG server, it can apply .Save() globally, or universally, if you will.

At this point I have the deployment script draining and resuming connections, but our local TMG expert pointed out that there may be a lag in synchronizing the “Drained” status between the two TMG servers.  So I need to add a little wait block that makes sure that after I call .Save() on the Drain or Resume request, we don’t proceed until both TMG servers know the status and have stopped accepting new requests for the given web server.

Finding good information on how to script TMG on Google was plenty difficult.  Finding information on how to script TMG using PowerShell remoting was even more difficult.
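That wait block is really just poll-until-true with a timeout. Here's a generic sketch in Python (the real thing would query each TMG server's COM hierarchy over remoting; `server_is_drained` below is a hypothetical stand-in, not a real API):

```python
import time

def wait_until(predicate, timeout_s=60, poll_s=2):
    """Poll predicate() until it returns True or the timeout expires.

    Returns True if the condition was met, False on timeout.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_s)
    return False

def all_drained(servers, is_drained):
    """True only when every server in the list reports drained."""
    return all(is_drained(s) for s in servers)

# Hypothetical usage: after calling .Save() on the Drain request, block
# the deploy until BOTH TMG servers agree on the status.
# wait_until(lambda: all_drained(["tmg-primary", "tmg-secondary"],
#                                server_is_drained))
```

A timeout matters here: if the TMG servers never converge, you want the deploy to fail loudly rather than hang forever.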
I’m hoping that this post helps steer some people the right way.

# Continuous Deployment

So my challenge this week (well, one of them) is to figure out a source control model that supports Continuous Deployment.

Continuous Deployment is, of course, the operational model whereby changes to code get deployed automatically (or nearly automatically, for the risk-averse) several/dozens/hundreds of times a day to production environments, in response to a change in the source code or resources or configuration values or persistence structures.  The simplest workflow is:

1. Developer commits a change to source
2. A continuous integration server picks up the change from the VCS, and runs one or more builds
3. The builds pass all the tests
4. The CI server or some other process picks up the build artifacts and pushes them to production
5. Repeat!

It gets a little tricky when trying to design the proper VCS workflow.  In days past, I’ve worked in a slower environment that was very attentive to VCS workflows, and our process revolved around the so-called stable trunk paradigm: features were developed in branches, and then, during a period of several hours, some manual steps were run to merge tested changes back into the trunk, which was then branched to a “release branch”, from which the actual deployment artifacts were produced.  During subsequent feature development, if any production bug were discovered, we’d fix it in the release branch, and merge changes back to trunk and then to all open feature branches.

It worked well, but when faced with the goal of Continuous Deployment, stable trunk seems to be untenable.  Automatically managing 3 open release branches per day, 2 of which would be rendered obsolete, seems like a pain in the ass.

So, unstable trunk?  Trunk always contains releasable code?
That makes my gut churn a bit:

• Release A
• Developer “Hacky” commits change A’, which fails some tests
• While A’ is building and testing, a bug is discovered in A, which requires a fix A’’ to trunk
• We can’t release A’’ until Developer “Hacky” fixes her problems in A’

Well, we could, but the mechanism would be back to feature branches, where each feature is developed in a branch and, when done/tested/passed QA, merged back into the trunk.  I still don’t know how to avoid that period of a few minutes where the merge has occurred but the tests have not yet been confirmed as passing.  If we need to fix a bug in production RIGHT NOW, that seems like a weak spot, unless we introduce something like:

• branch trunk T to T’
• merge feature branch into T’
• run tests on T’
• merge T’ back to trunk T

But if someone has committed to T in the meantime, what do you do?  Run the whole process over again, I guess.  It gets to a point where it’s turtles all the way down.

One thing CD seems to eliminate from discussion is the promotion model, where a human is responsible for deciding when code gets promoted from staging to production.  Or, if you have that model, it’s automatic and implicit in the deployment process.

It also means – I think – that you should have feature branches for bigger features, whose code you don’t necessarily want to push to production until the entire feature is complete.  I know there’s a whole sub-literature on the topic of introducing “hidden features” into production, which are either partially complete or outright broken, but I don’t like that.

I found a couple of articles on Continuous Deployment very worthwhile:

A short non-technical article about Etsy’s CD setup, written by Fred Wilson.  The key takeaway from this was that we don’t roll back failures, we fix them.  That’s interesting.  The calm, dispassionate, conservative part of me quails slightly at the thought of not having the ability to roll back, but…maybe?  Maybe?
Eric Ries’s “Continuous Deployment in 5 Easy Steps”.  This one has an interesting dimension: stopping the line via a commit check script, e.g. if a build breaks, stop all new commits from happening.  VCS-wide, no less.  That seems a bit heavy-handed, but it probably results in a very disciplined approach from developers prior to committing anything.  To the detriment of speed?  Maybe.  Hm.

Timothy Fitz on IMVU’s Continuous Deployment process.  He talks a bit about a personal bugaboo of mine, the intermittently failing test.  I HATE those, and I don’t use the word “hate” lightly.  I have spent many man-months of my life figuring out how to write tests (and thus, code) that don’t mysteriously blow up in the presence of weird input or context, and I like that he (correctly) notes that as the CD process scales, you MUST address this issue or you will be a dead duck.

# What version of IIS are you running?

Short answer – it’s tied to the version of the Windows OS you’re running.

Slightly less-short answer:

(gi hklm:\software\microsoft\inetstp).GetValue("VersionString")

# HOWTO: Count My Unit Tests in PowerShell

Handy PowerShell snippet to see how many unit tests you have in C# files in a given directory.  This matches NUnit and MSTest attributes.

$p = "c:\path\to\my\solution"
$numTests = 0
gci $p -recurse -force -include @("*.cs") |% {
    $c = cat $_.FullName
    $numTests += ($c |? { $_ -match "^([^//])*\[Test[^(Class|Initialize|Cleanup|Fixture|SetUp|TearDown)]" }).Length
}
$numTests
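As a sanity check, the same count can be reproduced outside PowerShell. Here's a Python equivalent (my own sketch, not from the original post) using a lookahead regex that mirrors the intent of the pattern above: count `[Test...]` attributes while skipping fixture, setup, and teardown attributes.

```python
import re
from pathlib import Path

# Matches NUnit/MSTest test attributes like [Test] or [TestCase(...)],
# while excluding [TestClass], [TestFixture], [SetUp], [TearDown], etc.
TEST_ATTR = re.compile(
    r"^\s*\[Test(?!Class|Initialize|Cleanup|Fixture|SetUp|TearDown)")

def count_tests(root):
    """Count test-attribute lines across all .cs files under root."""
    n = 0
    for cs in Path(root).rglob("*.cs"):
        for line in cs.read_text(errors="ignore").splitlines():
            if TEST_ATTR.search(line):
                n += 1
    return n
```

Note the two patterns aren't byte-for-byte identical (the PowerShell one uses a character class where this uses a negative lookahead), but they count the same attributes on typical C# source.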

# Getting the Zip task in MSBuild Extension Pack to work

I wanted to use the MSBuild.ExtensionPack.Compression.Zip task in a new MSBuild-based deployment package I’ve been creating.  However, I got this message a couple times after installing the ExtensionPack:

C:\blah.build(63,9): error MSB4062:
The "MSBuild.ExtensionPack.Compression.Zip"
task could not be loaded from the assembly C:\Program
Files (x86)\MSBuild\ExtensionPack\4.0\MSBuild.ExtensionPack.dll.
Could not load file or assembly 'file:///C:\Program Files
(x86)\MSBuild\ExtensionPack\4.0\MSBuild.ExtensionPack.dll'
or one of its dependencies. The system cannot find the file
specified. Confirm that the <UsingTask> declaration is
correct, that the assembly and all its dependencies are
available, and that the task contains a public class
that implements Microsoft.Build.Framework.ITask.

I double- and triple-checked the install location, verified that I was using the <Import> element correctly, even went so far as to put in an explicit <UsingTask> element, but kept getting the error.

or one of its dependencies

…which I’ve seen approximately 3.54e12 times before.

This is a job for Assembly Bind Failure Logging.

So after turning on Assembly Bind Failure Logging, and re-running the build, I see the following error running FUSLOGVW:

*** Assembly Binder Log Entry  (4/3/2012 @ 10:10:21 AM) ***

The operation failed.
Bind result: hr = 0x80070002. The system cannot find the file specified.

Running under executable  c:\windows\microsoft.net\framework64\v4.0.30319\MSBuild.exe
--- A detailed error log follows.

=== Pre-bind state information ===
LOG: User = Anthony-14z\Anthony
LOG: Where-ref bind. Location = C:\Program Files (x86)\MSBuild\ExtensionPack\4.0\MSBuild.ExtensionPack.dll
LOG: Appbase = file:///c:/windows/microsoft.net/framework64/v4.0.30319/
LOG: Initial PrivatePath = NULL
LOG: Dynamic Base = NULL
LOG: Cache Base = NULL
LOG: AppName = MSBuild.exe
Calling assembly : (Unknown).
===
WRN: Native image will not be probed in LoadFrom context. Native image will only be probed in default load context, like with Assembly.Load().
LOG: Using application configuration file: c:\windows\microsoft.net\framework64\v4.0.30319\MSBuild.exe.Config
LOG: Using host configuration file:
LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework64\v4.0.30319\config\machine.config.
LOG: All probing URLs attempted and failed.

Hm. Not much help, certainly no smoking gun.

But then I noticed in the help text the following line:

This task uses http://dotnetzip.codeplex.com v1.9.1.8 for compression.

Could that be the problem?  Well, probably not, because the Ionic.Zip.dll file is already installed in my MSBuild Extension Pack directory.  However, I decided to write a little PowerShell script to make sure:

PS> $msbuildpackpath = "c:\program files\msbuild\extensionpack\4.0\msbuild.extensionpack.dll"
PS> $a = [System.Reflection.Assembly]::LoadFile($msbuildpackpath)
PS> $a.GetExportedTypes()

And here’s my output:

Exception calling "GetExportedTypes" with "0" argument(s): "Could not load file or assembly 'Ionic.Zip, Version=1.9.1.8, Culture=neutral, PublicKeyToken=edbe51ad942a3f5c' or one of its dependencies. The system cannot find the file specified."
At line:1 char:20
$a.GetExportedTypes <<<< ()
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : DotNetMethodException

It’s trying to load Ionic.Zip.dll version 1.9.1.8.  What’s the version I have?

PS> cd "c:\Program Files\MSBuild\ExtensionPack\4.0"
PS> $i = [System.Reflection.Assembly]::LoadFile("$($pwd)\Ionic.Zip.dll")
PS> $i.FullName
Ionic.Zip, Version=1.9.1.8, Culture=neutral, PublicKeyToken=edbe51ad942a3f5c

Well.  Hm.  At this point I’m thinking to myself, “Just GAC it”, which means put Ionic.Zip.dll into the GAC and call it good.  But I don’t like that option.  It sort of sidesteps the whole issue: why can’t MSBuild.ExtensionPack.dll find and load another DLL right in the same directory?

Doing a little more testing, I wanted to create a little test .zip file before the bulk of my build – I figure throwing an early error if the <Zip> task doesn’t work will save some time and lend some clarity to the process.  In creating this task, I used the <DateAndTime> task, also from the MSBuild Extension Pack, and got this error:

C:\Users\Anthony\documents\EveryMove\hg\emcore\emsite\emsite.build(41,9): error MSB4062: The "MSBuild.ExtensionPack.Framework.DateAndTime" task could not be loaded from the assembly C:\Program Files (x86)\MSBuild\ExtensionPack\4.0\MSBuild.ExtensionPack.dll. Could not load file or assembly 'file:///C:\Program Files (x86)\MSBuild\ExtensionPack\4.0\MSBuild.ExtensionPack.dll' or one of its dependencies. The system cannot find the file specified. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask.

WTF?  OK, I go back and make sure the original .zip file is Unblocked, which has bitten me before, and repair the MSBuild Extension Pack install.  Re-run.  Still no dice.

At this point I’m pulling my hair out.  I’ve got deadlines to meet here!  Uninstall MSBuild Extension Pack.  Delete the downloads.  Reboot.  Re-download the install .zip file.  Unblock.  Unzip.  Install.  I re-run my little script in PowerShell to load MSBuild.ExtensionPack.dll (using LoadFile), and get the same error.  At the end, a sad little note:

LOG: All probing URLs attempted and failed.

Wow.  My next thought is to add the MSBuild Extension Pack directory to the PATH.

No dice.  I double-check the obvious.  I’m logged in as a local admin.  The Ionic.Zip.dll file is there, in the same directory as MSBuild.ExtensionPack.dll.  Figuring that there may be something with the bind context, I try just Load() instead of LoadFile():

[System.Reflection.Assembly]::Load("MSBuild.ExtensionPack, Version=4.0.0.0, Culture=neutral, PublicKeyToken=10d297e8e737fe34")

No dice.  I try to capture the .LoaderExceptions, which I really expect to be just like the Fusion Log Viewer log details (and am right):

try {
    $p = "c:\program files\msbuild\extensionpack\4.0"
    cd $p
    $m = [System.Reflection.Assembly]::LoadFile("$($pwd)\MSBuild.ExtensionPack.dll")
    $m.GetTypes()
} catch [System.Reflection.ReflectionTypeLoadException] {
    Write-Error $_.Exception
    Write-Host "LoaderExceptions"
    $_.Exception.LoaderExceptions |% { Write-Error $_.ToString() }
}

I’m going insane.  I know that the assembly is not found because the directory that MSBuildExtensionPack is installed in is not being probed by the CLR.

Is it a CAS thing?  I thought CAS was deprecated, but the Ionic.Zip.DLL assembly is built against .NET 2.0, so maybe there’s something there.  That wouldn’t explain the error with <DateAndTime>, however.   Plus, I’m getting File Not Found exceptions during probing; I would expect some security-related error in the Fusion log if it were CAS.

I don’t want to GAC it.  I don’t.  But I’m tired, and cranky, and want this little task to be done so I can move on.

PS C:\Program Files\MSBuild\ExtensionPack\4.0> $iz = "$($pwd)\Ionic.Zip.dll"
PS C:\Program Files\MSBuild\ExtensionPack\4.0> gacutil /i $iz /f

Microsoft (R) .NET Global Assembly Cache Utility.  Version 3.5.30729.1
Copyright (c) Microsoft Corporation.  All rights reserved.

Assembly successfully added to the cache

Gah.

There’s still one more problem: this line in the MSBuild.ExtensionPack.tasks file:

<ExtensionTasksPath Condition="'$(ExtensionTasksPath)' == ''">$(MSBuildExtensionsPath)\ExtensionPack\4.0\</ExtensionTasksPath>

…means that it’s looking in C:\Program Files (x86)\, which is not where my ExtensionPack is located.

Changing that one line to two:

<ExtensionTasksPath Condition="'$(ExtensionTasksPath)' == ''">$(MSBuildExtensionsPath64)\ExtensionPack\4.0\</ExtensionTasksPath>
<ExtensionTasksPath Condition="'$(ExtensionTasksPath)' == ''">$(MSBuildExtensionsPath32)\ExtensionPack\4.0\</ExtensionTasksPath>

…and my MSBuild script works.

Fucking hell.

# HOWTO: Get rid of the MS-DOS path warning in Cygwin

This one bugs me.  Every clean install of Cygwin means I have to deal with this:

PS C:\Users\Anthony\downloads> sha1sum .\Git-1.7.9-preview20120201.exe
cygwin warning:
  MS-DOS style path detected: .\Git-1.7.9-preview20120201.exe
  Preferred POSIX equivalent is: ./Git-1.7.9-preview20120201.exe
  CYGWIN environment variable option "nodosfilewarning" turns off this warning.
  Consult the user's guide for more details about POSIX paths:
    http://cygwin.com/cygwin-ug-net/using.html#using-pathnames
\0627394709375140d1e54e923983d259a60f9d8e *.\\Git-1.7.9-preview20120201.exe
PS C:\Users\Anthony\downloads>

To get rid of the warning message, just add a new CYGWIN environment variable to your Windows box, and set the value to “nodosfilewarning”.

Voila.  No more nag message.

# HOWTO: Find all file extensions in PowerShell

Here’s a handy little script to tell you all of the types of file extensions there are in a directory (and subdirectories), along with their counts:

$types = @{}
gci . -force -recurse | where-object { $_.PsIsContainer -eq $false } | foreach-object {
    $ext = $_.Extension
    if ($types[$ext] -eq $null) {
        $types[$ext] = 1
    } else {
        $types[$ext] = $types[$ext] + 1
    }
}
$types.GetEnumerator() | sort-object -Property Value -Descending

I needed this to ensure that my Zip task correctly swept up all the files we might need for a given deployment.
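For the record, the same survey is a one-liner-ish affair in Python too (my own equivalent, not part of the original post), using `collections.Counter`:

```python
from collections import Counter
from pathlib import Path

def extension_counts(root="."):
    """Count file extensions under root (recursively), most common first.

    Returns a list of (extension, count) pairs; files with no extension
    show up under the empty string.
    """
    exts = Counter(p.suffix for p in Path(root).rglob("*") if p.is_file())
    return exts.most_common()

# Example usage:
# for ext, n in extension_counts("."):
#     print(f"{ext or '(none)'}: {n}")
```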