This post hardly matters now as Netflix is discontinuing the Windows Media Center Netflix App on September 15th 2015. Since reinstalling Windows and Media Center on my HTPC a few months ago I have been getting terrible vertical sync/screen tearing issues in the WMC Netflix App.
I tried several suggestions related to video drivers, Silverlight versions, etc. but none made any difference. I don't get tearing when watching videos, DVDs, Live TV or YouTube. The issue only occurs in the WMC Netflix app. It makes some videos unbearable.
My HTPC doesn't have the greatest specs so after installing I disabled extra services, removed wallpapers and turned off Aero. I was hoping this would free up memory and CPU and GPU time allowing WMC to perform better.
After the announcement that Windows 10 would not provide Media Center I have been looking for new DVR software. Yesterday I decided to give MediaPortal a spin. During the installation it recommended enabling Aero to help avoid screen tearing. Well, that immediately made me think of my horrific Netflix performance. I enabled Aero and, how about that, Netflix works perfectly now!
Dang It! It's been months and I never came across a suggestion that Aero could affect Silverlight's video rendering.
Ah well, at least I get to enjoy one final month of tear free WMC Netflix viewing!
Line by Line
Snippets, lines, and complaints. If you find a solution you are fortunate.
Wednesday, August 05, 2015
Sunday, December 21, 2014
iPhone security, App Switcher, and Snapshot Cache
In order to provide a smooth and fluid user experience on resource-limited hardware, iOS takes a "snapshot" of the application's view anytime the application is suspended. This snapshot is then displayed when starting or switching back to the application, giving the impression that the application starts up much faster than it actually does. The snapshot is also used as the application's thumbnail in the system's App Switcher view.
This is a great feature that we take for granted and rarely notice. It does however present a privacy and security risk. Apps that display private data can leak this data via these snapshots. Then an application, such as iFunBox, can be used to collect the snapshots and gather the leaked data; even non-Jailbroken phones are vulnerable.
This data leaking is preventable by individual applications, but many have not secured against it. Among the most notable applications that do not protect data are web browsers, including Safari and Chrome, even when in Private Browsing mode.
You can disable the system's ability to save these snapshots if you have a Jailbroken device. It is simply a matter of replacing the appropriate snapshot folders with a symbolic link to /dev/null.
The snapshots are stored in two different areas. There is a general snapshot folder and a snapshot folder for each application in its data folder.
Many application snapshots are stored in a general location at:
/var/mobile/Library/Caches/Snapshots
The rest are stored on an app by app basis in the application's data folder found at:
/var/mobile/Containers/Data/Application/<App GUID>/Library/Caches/Snapshots
Finding the application's GUID can be tricky, but a good file manager, like Filza, makes this trivial.
Delete the Snapshots folder and create a symlink to /dev/null so the snapshots written to this folder are simply discarded.
The commands for creating the symbolic links can be found at http://www.zdziarski.com/blog/?p=140
# rm -rf /var/mobile/Library/Caches/Snapshots
# ln -s /dev/null /var/mobile/Library/Caches/Snapshots
Modify these commands as needed for each application, or use your file manager to create the symbolic links.
Monday, October 14, 2013
HydraPaper update
I've been using HydraPaper at home and work for several months now and noticed a few minor bugs and missing features. I've made a small update that includes the following:
- Using the tray icon Exit menu now actually exits.
- Running out of one type of wallpaper doesn't reset the list for the other type. This bug made it so it didn't always cycle through every image before starting over again.
- HydraPaper now stops cycling the wallpaper when a remote desktop session begins.
- Orientation EXIF data in JPGs is now honored. All those sideways family vacation pictures are now automatically rotated when displayed (assuming your camera stores orientation data).
Download
The updated version: HydraPaper v1.1 from Dropbox
Thursday, February 21, 2013
HydraPaper
Some friends of mine have begun writing about their programming exploits and I've found their experiences very interesting. Since I haven't written in quite awhile I thought posting a couple of little utilities that I've created would be fun.
My most recent project is called HydraPaper. It is a wallpaper rotator for those with multi-headed systems. Of course it will work for a single monitor just as well and has features that are useful even if you don't have two or more monitors.
Features
- Resizes large or small wallpapers to fit based on your preference
- Creates a single large wallpaper from several individual wallpapers that will span all your monitors correctly based on your screen resolution and positions as setup in the Windows Control Panel
- Supports and resizes multi-monitor wallpapers. These are sized to properly span the single image across all your monitors.
- Automatically detects changes in your screen configuration and creates a new wallpaper immediately.
- Suspends when you lock your computer and resumes when you unlock it.
- Avoids repeating wallpapers until they have all been shown
- Runs quietly in the System Tray
Download
Download ZIP from DropBox. There is no installer. Check the readme.txt file. This project was built in C# using Visual Studio 2010 and so requires the .Net 4.0 Framework to run.
Experience
I decided to work on this project as a respite from the tiring coding projects going on at work. I had this project in the back of my mind for awhile. I was growing tired of manually constructing wide-screen wallpapers at the exact resolution needed for the Tile setting to span multiple monitors. It got more difficult when I began using one monitor in portrait orientation rather than the standard landscape orientation. The manual process for creating a working multi-monitor wallpaper became the basis for the method I use in HydraPaper.
The first step in the project was to find out how to programmatically set the wallpaper. This has already been done and I used the code library found here. It works well and has a simple interface.
I began by attempting to replicate the Windows Screen Resolution tool where each monitor is represented by a numbered graphic. I got this to work and you could set per screen settings. But after some testing I found all this was overkill. It was an interesting exercise but it wasn't necessary to have individual screen settings and so the fancy UI was pretty but unnecessary. So out it went and I simplified down to the current Folder and Resize method UI.
After further testing I decided I needed to support wallpapers that were already formatted for multiple monitors since some of my favorite wallpapers in my collection were in this format. So I cloned the single wallpaper settings UI to support these images as well.
To keep it simple I don't try to detect whether an image has the right proportions to be treated as a multi-monitor wallpaper. Instead you have to divide up the images into separate folders.
The default folder selection dialog in Windows uses a folder tree. I have always hated this dialog. You can't paste a path into the address or selection bar and you have to drill in every time. I wanted the regular file selection dialog, but for folders. This was solved by using the Ookii dialogs library. Awesome. UI is done.
The next step was working out how to resize images. I already had in mind to support a variety of resize options: touch outside (which resizes and crops to fill the screen), touch inside (which resizes until the whole image fits, leaving blank areas on the sides), center (which does not resize, just centers), and stretch (which resizes to fit without preserving the image's aspect ratio). In the end I only use the touch outside option, but the work is done so I left the options in.
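The scale-factor arithmetic behind these four modes is simple. HydraPaper itself is C#, so the following is just an illustrative sketch of the math, not the actual code:

```python
# Scale factors for the four resize modes described above (illustrative sketch).

def scale_factors(img_w, img_h, screen_w, screen_h, mode):
    """Return the (x, y) scale factors to apply to the image."""
    fit_x = screen_w / img_w
    fit_y = screen_h / img_h
    if mode == "touch_outside":   # fill the screen, cropping the overflow
        s = max(fit_x, fit_y)
        return s, s
    if mode == "touch_inside":    # whole image visible, blank bars on the sides
        s = min(fit_x, fit_y)
        return s, s
    if mode == "center":          # no resize at all
        return 1.0, 1.0
    if mode == "stretch":         # fill exactly, ignoring aspect ratio
        return fit_x, fit_y
    raise ValueError(mode)

# A 4000x3000 photo on a 1920x1080 screen, touch outside:
# scales by 0.48 to 1920x1440 and the excess height is cropped.
print(scale_factors(4000, 3000, 1920, 1080, "touch_outside"))
```

The only difference between touch outside and touch inside is max versus min over the two fit ratios.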
Getting a list of all the screens and their resolutions is very easy in C#. The System.Windows.Forms.Screen class provides data about each screen: I have access to the screen's width and height and the offset from the primary screen. The offset can be negative if the screen is to the left of or above the primary screen.
Since my image can't have negative X, Y coordinates (well, it turns out that a C# graphics canvas can, but it was too late when I found that out) I need to determine how much to offset my grid so I have room to draw images for the negative-offset screens at positive image coordinates. This translates my negative-offset screens to positive image coordinates, but it means the primary screen is no longer drawn at (0,0). I'll have to take steps to correct the image later, since the wallpaper's (0,0) coordinate anchors to the primary monitor's (0,0) coordinate rather than the top-left monitor.
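The offset computation amounts to shifting everything by the most negative screen coordinates. Again, the real code is C#; this Python sketch just shows the arithmetic, with each screen as an (x, y, width, height) tuple relative to the primary at (0, 0):

```python
# Illustrative sketch of the canvas/offset math described above (not HydraPaper's code).

def canvas_layout(screens):
    # Shift everything right/down so no screen lands at negative image coordinates.
    # The primary is at (0, 0), so the shifts are >= 0.
    shift_x = -min(s[0] for s in screens)   # > 0 only if a screen is left of primary
    shift_y = -min(s[1] for s in screens)   # > 0 only if a screen is above primary
    width  = max(s[0] + s[2] for s in screens) + shift_x
    height = max(s[1] + s[3] for s in screens) + shift_y
    # Where each screen's image gets drawn on the big canvas:
    positions = [(s[0] + shift_x, s[1] + shift_y) for s in screens]
    return (width, height), (shift_x, shift_y), positions

# Primary 1920x1080 with a 1680x1050 monitor to its left:
size, shift, pos = canvas_layout([(0, 0, 1920, 1080), (-1680, 0, 1680, 1050)])
print(size, shift, pos)   # (3600, 1080) (1680, 0) [(1680, 0), (0, 0)]
```

Note how the primary screen ends up drawn at (1680, 0) rather than (0, 0), which is exactly the anchoring problem the next steps fix.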
Now I resize, clip and draw each image to its offset position and I end with a very large wallpaper that looks like just what you'd want to see on your screens.
But due to the primary anchoring behavior I have to shift the image back to get (0,0) to line up with (0,0) again. To do this I create another large graphic. I take the image and clone from my offset primary position and draw it at (0,0) on the new graphic. Now my new graphic is positioned properly but I'm missing everything to the left and above the primary. So I take everything above the primary offset and draw it at the bottom of the new graphic. Then I take everything to the left of the primary offset and draw it at the right of the new graphic. The new chopped up image now anchors properly and will wrap to the negative offset screens.
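The chopping described above is equivalent to a shift with wraparound: every pixel moves left and up by the primary's offset, and whatever falls off the left/top edges reappears on the right/bottom. Here is that idea on a tiny pixel grid (an illustrative Python sketch, not the C# implementation):

```python
# The wrap step described above, sketched on a tiny pixel grid.

def wrap_to_primary(pixels, px, py):
    """pixels: 2D list [row][col]. Returns the image shifted so the pixel at
    (px, py) moves to (0, 0), with the cut-off left/top portions wrapped
    around to the right/bottom."""
    h, w = len(pixels), len(pixels[0])
    return [[pixels[(y + py) % h][(x + px) % w] for x in range(w)]
            for y in range(h)]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8]]
# Primary was drawn at column 2; columns 0-1 (the left monitor) wrap to the right.
print(wrap_to_primary(img, 2, 0))  # [[3, 4, 1, 2], [7, 8, 5, 6]]
```

When Windows tiles this wrapped image anchored at the primary's (0,0), the wrapped-around portions land back on the negative-offset monitors.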
The new graphic becomes my final wallpaper and is saved as a JPG and is set as the desktop's wallpaper. You can find the rendered wallpaper under your %LocalAppData%/HydraPaper folder. If your primary monitor is your top/left-most monitor you won't see any funny chopping. To get an idea of how the image wraps try adjusting the positions of your monitors in the Control Panel.
I avoid repeating wallpapers by flagging each file used until I run out of files and then I can start to reuse the files again. Before each wallpaper update I rescan the folders to check for new or missing files and update the internal list.
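The no-repeat rotation boils down to flagging files as used and clearing the flags once everything has been shown. A sketch of that bookkeeping (illustrative Python, not HydraPaper's C#; the class and method names are made up for the example):

```python
# Sketch of the no-repeat cycling and rescan behavior described above.
import random

class WallpaperCycler:
    def __init__(self, files):
        self.files = set(files)
        self.used = set()

    def rescan(self, files):
        # Pick up new files and drop missing ones before each update,
        # keeping the used-flags for files that still exist.
        self.files = set(files)
        self.used &= self.files

    def next(self):
        unused = self.files - self.used
        if not unused:            # everything shown once; start over
            self.used.clear()
            unused = self.files
        choice = random.choice(sorted(unused))
        self.used.add(choice)
        return choice

c = WallpaperCycler(["a.jpg", "b.jpg", "c.jpg"])
picks = {c.next(), c.next(), c.next()}   # one full cycle hits every file exactly once
print(picks == {"a.jpg", "b.jpg", "c.jpg"})  # True
```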
And that is it. It took me awhile to wrap my head around the math for properly scaling, clipping and positioning each part of the image but I got there in the end. My math really has gotten rusty since my school days.
I built the project on .Net 4.0 but there isn't anything special about .Net 4.0 used in the project except for some of the Linq-to-object methods called on some collections.
Thursday, May 27, 2010
Setting up a custom Team Build project
Our company has Team Foundation Server 2008 and Team Build agents all set up and ready to go, except that no one really knows what to do with them. We don't have unit tests, we don't do code analysis, and our deployment process is quite manual. And we haven't really established what the Microsoft-compatible method is for organizing our version control repository.
Microsoft products are very "in the box". By that I mean, if you are going to do what Microsoft envisioned you would do then everything is very easy (if you can figure out what it is they envisioned). As soon as you step outside the box you're in for a world of pain.
As with many Microsoft tools, it seems that in order to understand the tool you have to first understand the tool. So, with this post, I hope to help some other newbies get started.
So, to help ease the pain (but while realizing this is the blind leading the blind) here are some get started tips.
Our scenario was to simply pull the latest version of a website from TFS and deploy it to a test web server. We wanted this to happen automatically when a check-in occurred. As simple as this sounds, it is not "in the box".
The box is this:
- Optionally get the latest code from TFS (with options for incremental get, cleaning, overwriting, etc.)
- Optionally clean the build
- Optionally modify the default Drop location format
- Build
- Optionally Test
- Optional Code Analysis
- Semi-optionally drop the build to a network share
All the configuration is handled inside Visual Studio which remotely configures the Build Server. If you are using TFS 2008 then you must use Visual Studio 2008 (with the Team Explorer component installed). If you are using Visual Studio 2010 with TFS 2008 you will not be able to "Manage Build Agents" which is a required piece.
Step 1: Set up the Build Agent (this all happens in Visual Studio)
- Using the Build menu select Manage Build Agents
- Give any display name and description
- Enter the computer name where you installed the TFS Build Agent service
- Enter the port (it'll probably be the default)
- The working directory is the location on the computer where the TFS Build Agent is installed where all the temporary files will be placed (source code, logs, build output, etc.). This includes a TFS MSBuild variable called $(BuildDefinitionPath). Leave this in your path so the files in this build stay separated from other builds. This variable is assigned the name of the Build Definition that you create later.
- They give you a bunch of warnings about having sufficient disk space (since in their scenarios builds take hours and a failed build is the worst thing possible).
- Set the Agent to Enabled.
Step 2: Build Definition
The build definition links together the source control path (so it knows what files to get), the build agent and the network share where it drops the output files. It also helps you create a basic TFSBuild.proj project file (it's just XML) where you customize what happens in the automated build.
In our environment we pretty much only use the Source Control portion of TFS (we don't use Work Items, Reports or Documentation). So I have mental blinders when I connect to the TFS server in Visual Studio (Team Explorer) and always immediately open the Source Control window without ever looking at the other items. One of those items is Builds. This is where you define your Build Definitions.
Right click on Builds and choose New Build Definition from the context menu. This gives you the Build Definition Dialog:
- Give the Build Definition a name
- Define a workspace. My initial workspace was full of unrelated items. Delete everything you don't need as part of the build and add only what you need. Or, you can use Copy Existing Workspace to pull in one that you are already using in Visual Studio.
- Next create a project file. Project files are stored and executed from within TFS, so create a place in TFS to store the project files. We store ours in a separate location from our projects (a separate TeamBuild folder with a subfolder for each Build Definition). Use the Create... button to get a default TFSBuild.proj file created for you. Here you choose which solution to build, plus your tests and code analysis. Our project didn't use any of these so you're on your own. Once you've finished the Create... wizard, new TFSBuild.proj and TFSBuild.rsp files will be created in the location you specified. This TFSBuild.proj file is where you customize your build.
- Next choose your retention policy. The output from each build is kept. During your initial setup you might want to keep everything so you can review the logs. You can go back and revisit what you want to keep after everything is finished. In the end you'll probably only want to keep Failed or Partially Succeeded builds so you can review the logs.
- Next choose the Build Agent you set up in the last step. You are also required to choose a network share where the build will be "dropped" (a place where the result will be copied to). For some reason it MUST be a network share. Also, the user you configured the Team Build Service to execute as must have Full Permission access to that share. The build agent will automatically create a drop folder on the share you specify (so you can use one share for lots of different Build Definitions).
- Choose your desired trigger. During your initial setup you probably want to select "Check-ins do not trigger a new build". This amounts to "Manual Build". This way you can tweak your automated build and manually execute it rather than being tied to some other build trigger. Later you can return and select the most appropriate option.
Step 3: Customizing the TFSBuild.proj
In order to make the Build Server do something other than the default we need to break open the TFSBuild.proj file. Without some kind of guidance this can be a nebulous void of XML. However, the project file is what is executed and little is hidden (even if nothing is obvious).
Here are some tips for dealing with this file:
- There are some DO NOT EDIT warnings in there and, probably, you can just ignore them.
- There are some Backwards Compatibility lines in there. Unless you are using old TFS pieces you can delete them (so they're not in the way). Most of that legacy stuff is now defined in the Build Definition rather than the TFSBuild.proj file.
- Think of this file like a class that inherits from another class. That parent class is Microsoft's build instructions (get from TFS, build, clean, test, drop, etc.). You get all the default functionality and can override any part of it to do what you want.
- Remember that when you override something you must still accept the same inputs and provide the proper outputs if you want everything to work right in the overall system. (For example, there is a specific action you have to take to indicate that the build process was successful. If you don't, it will report Failed even if you didn't have any errors.)
Open Microsoft's Team Build targets file. Its path is: $(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets
This translates to c:\Program Files\MSBuild\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets in our environment.
Everything Team Build does is (pretty much) in that file. If you don't overwrite anything in your TFSBuild.proj file then you're looking at the code that will be executed.
There are still a lot of non-obvious things going on here. You'll see a lot of references to DesktopBuild. That is just there to throw you off. Also, since you're looking at XML and not a procedural language the execution order is not defined by the order items appear in the file.
A couple of items that differ in a TFS build vs a regular build are:
You can override EndToEndIteration to make your own fully customized build process, but since you are working in a TFS build environment you'll probably still want to execute several of the existing dependencies. InitializeBuildProperties, for example, is important as it imports a bunch of TFS settings (like paths, TFS Source Control URLs, etc.) into the build so you can work from those.
Also, some activities that you will want to execute are already available. So get familiar with what is there (e.g., the "get" target pulls down the files from source control).
Defining/Overriding variables
There are two kinds of variables in these build files. The most common are properties, defined inside PropertyGroup tags and referenced with the $() syntax. The other kind are called Items; they are defined inside ItemGroup tags, hold lists of values, and are referenced with the @() syntax.
You can define your own variables like so:
<PropertyGroup>
  <MyVariable>The Value</MyVariable>
  <MyVariable2>The Value of Var #2</MyVariable2>
</PropertyGroup>
Later you can dereference the variables using the $() syntax:
<PropertyGroup>
  <MyVariable3>$(MyVariable) of Var #3</MyVariable3>
</PropertyGroup>
Defining/Overriding Targets
You define (or override existing) targets using the Target tag. Your own Targets need a unique name. Override an existing target by using the existing target name. Microsoft's pre-defined targets include several that are intended to be overridden.
<Target Name="MyTarget">
  <Message Text="This is my target" />
</Target>
A Target can hold more variables (PropertyGroup) and call other actions. Above I call the Message action. There are lots of actions available. See MSDN or Google to get some lists.
In MS' default configuration most Targets have a before and after Target you can override to do whatever you need.
Greater Customization
In our project we didn't want to build anything. We just wanted some files copied out to a development server whenever someone checked in a change. This was too outside the box to do with the default configuration.
So I ended up with the following override of EndToEndIteration to do what I wanted:
<PropertyGroup>
  <EndToEndIterationDependsOn>
    CheckSettingsForEndToEndIteration;
    InitializeBuildProperties;
    InitializeEndToEndIteration;
    InitializeWorkspace;
    Get;
    DeployWebFiles;
    Messages;
  </EndToEndIterationDependsOn>
</PropertyGroup>
<!-- Entry point: this target is invoked on the build machine by the build agent -->
<Target Name="EndToEndIteration"
DependsOnTargets="$(EndToEndIterationDependsOn)" />
You can see that it includes several of the initialization Targets but then skips to Get and then to my own Targets.
My targets look like this:
<PropertyGroup>
  <DeployWebFilesDependsOn></DeployWebFilesDependsOn>
</PropertyGroup>
<Target Name="DeployWebFiles"
DependsOnTargets="$(DeployWebFilesDependsOn)">
<BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
           BuildUri="$(BuildUri)"
           Name="DeployWebFiles"
           Message="Deploying Web Files">
  <Output TaskParameter="Id" PropertyName="DeployWebFilesBuildStepID" />
</BuildStep>
<ItemGroup>
  <FilesToCopy Include="$(SolutionRoot)\NET 1.1\Trunk\Source\Web\**\*" />
</ItemGroup>
<Copy
SourceFiles="@(FilesToCopy)"
DestinationFiles="@(FilesToCopy ->'$(DropLocation)\%(RecursiveDir)%(Filename)%(Extension)')"
SkipUnchangedFiles="true"
ContinueOnError="false" />
<BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
BuildUri="$(BuildUri)"
Id="$(DeployWebFilesBuildStepID)"
Status="Succeeded"
/>
<SetBuildProperties
TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
BuildUri="$(BuildUri)"
CompilationStatus="Succeeded"
TestStatus="Succeeded" />
<OnError ExecuteTargets="PartialSuccess" />
</Target>
<Target Name="PartialSuccess">
<BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
BuildUri="$(BuildUri)"
Id="$(DeployWebFilesBuildStepID)"
Status="Failed"
/>
<SetBuildProperties
TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
BuildUri="$(BuildUri)"
CompilationStatus="Failed"
TestStatus="Failed" />
</Target>
I found examples of copying files through Google. I had to experiment a little to discover the file structure. You can browse the folders on your Build Agent system (remember Step 1?).
Setting the right outputs
Visual Studio/TFS needs certain outputs in order to understand what is going on inside your custom targets.
I used the BuildStep action to add extra steps that I can see inside Visual Studio's Build Explorer, so I can watch the live progress of my build. TFS automatically adds some of its own build steps, but you can add as many as you want. You first create a new build step and accept its output of a Build Step ID (which you save in a variable). Then you can update that Build Step's status by passing the ID (using the $() syntax) back in.
The last tricky item is getting the build to report as successful. I finally got everything working with no errors or warnings but Visual Studio still reported the build as failed.
It looks like when you allow some of MS' default targets to run they somehow update the build status (even though I can't find where). But you can handle this manually by calling the SetBuildProperties action.
We have to set the CompilationStatus and TestStatus to the string "Succeeded" in order to get TFS to believe that the build was successful. Again, here we are outside the box and even though we aren't doing a compilation or a test we have to report them as successful.
Drop Location
The last hiccup we had was with the Drop Location. All of MS' code turns the drop location from the one we specified in the Build Definition and adds a build path to it:
//myserver/myshare/BuildDefinitionName, Date.Number/
In our case we didn't want any of that. This meant we could either override the DropBuild target or roll our own Target. Since DropBuild is meant to deploy the OUTPUT of a build and not the source files, I decided to roll my own. We will probably use the DropBuild target in the future, and I didn't want to confuse other team members as to what the DropBuild functionality is really supposed to be.
The final hiccup that we haven't been able to resolve is that somewhere in the whole build process the Build folder is still generated in the Drop Location and the log file is placed inside it. I'm not sure if this is a function of the TFS Build service or one of the actions called by MS' build XML code. But it is really minor and we probably won't worry about it.
The End
I hope this helps get someone started. Unless you find a good book on the subject this initial hurdle is pretty difficult to get over. Most of the documentation/books I found were much more advanced and followed the original rule that in order to understand TFS Builds you have to first understand TFS builds.
Microsoft products are very "in the box". By that I mean, if you are going to do what Microsoft envisioned you would do then everything is very easy (if you can figure out what it is they envisioned). As soon as you step outside the box you're in for a world of pain.
As with many Microsoft tools it seems that in order to understand the tool have to first understand the tool. So, with this post, I hope to help some other newbies get started.
So, to help ease the pain (but while realizing this is the blind leading the blind) here are some get started tips.
Our scenario was to simply pull the latest version a website from TFS and deploy it to a test web server. We wanted this to happen automatically when a check-in occurred. As simple as this sounds it is not "in the box".
The box is this:
- Optionally get the latest code from TFS (with options for incremental get, cleaning, overwritting, etc).
- Optionally clean the build
- Optionally modify the default Drop location format
- Build
- Optionally Test
- Optional Code Analysis
- Semi-optionally drop the build to a network share
All the configuration is handled inside Visual Studio which remotely configures the Build Server. If you are using TFS 2008 then you must use Visual Studio 2008 (with the Team Explorer component installed). If you are using Visual Studio 2010 with TFS 2008 you will not be able to "Manage Build Agents" which is a required piece.
Step 1: Set up the Build Agent (this all happens in Visual Studio)
- Using the Build menu select Manage Build Agents
- Give any display name and description
- Enter the computer name where you installed the TFS Build Agent service
- Enter the port (it'll probably be the default)
- The working directory is the location out on the computer were the TFS Build Agent is installed where all the temporary files will be placed (source code, logs, build output, etc). This includes a TFS MSBuild variable thingy called $(BuildDefinitionPath). Leave this in your path so the files in this build stay separated from other builds. This variable is assigned the name of the Build Definition that you create later.
- They give you a bunch of warnings about having sufficient disk space (since in their scenarios builds take hours and a failed build is the worse thing possible).
- Set the Agent to Enabled.
Step 2: Build Definition
The build definition links together the source control path (so it knows what files to get), the build agent and the network share where it drops the output files. It also helps you create a basic TFSBuild.proj project file (it's just XML) where you customize what happens in the automated build.
In our environment we pretty much only use the Source Control portion of TFS (we don't use Work Items, Reports or Documentation). So I have mental blinders when I connect to the TFS server in Visual Studio (Team Explorer) and always immediately open the Source Control window without ever looking at the other items. One of those items is Builds. This is where you define your Build Definitions.
Right click on Builds and choose New Build Definition from the context menu. This gives you the Build Definition Dialog:
- Give the Build Definition a name
- Defined a workspace. My initial workspace was full of unrelated items. Delete everything you don't need as part of the build and add only those you need. Or, you can use the Copy Existing Workspace to pull in one that you are already using in Visual Studio
- Next create a project file. Project files are stored and executed from within TFS. So create a place in TFS to store the project files. We store ours in a separate location from our projects (a separate TeamBuild folder with a subfolder for each Build Definition). Use the Create... button to get a default TFSBuild.proj file created for you. Here you choose which solution to build. Choose your tests and code analysis. Our project didn't use any of these so you're one your own. Once you've finished the Create... wizard new TFSBuild.proj and TFSBuild.rsp files will be created in the location you specified. This TFSBuild.proj file is where you customize your build.
- Next choose your retention policy. The output from each is kept. During your initial setup you might want to keep everything so you can review the logs. You can go back and revisit what you want to keep after everything is finished. In the end you'll probably only want to keep Failed or Partially Succeeded builds so you can review the logs.
- Next choose the Build Agent you set up in the last step. You are also required to choose a network share where the build will be "dropped" (a place where the result will be copied to). For some reason it MUST be a network share. Note that the user you configured the Team Build Service to execute as must have Full Permission access to that share. The build agent will automatically create a drop folder on the share you specify, so you can use one share for lots of different Build Definitions.
- Choose your desired trigger. During your initial setup you probably want to select "Check-ins do not trigger a new build". This amounts to "Manual Build". That way you can tweak your automated build and manually execute it rather than being tied to some other build trigger. Later you can return and select the most appropriate option.
Step 3: Customizing the TFSBuild.proj
In order to make the Build Server do something other than the default we need to break open the TFSBuild.proj file. Without some kind of guidance this can be a nebulous void of XML. However, the project file is what is executed and little is hidden (even if nothing is obvious).
Here are some tips for dealing with this file:
- There are some DO NOT EDIT sections in there; you can generally just leave those alone.
- There are some Backwards Compatibility lines in there. Unless you are using old TFS components you can delete them so they're not in the way. Most of that legacy stuff is now defined in the Build Definition rather than the TFSBuild.proj file.
- Think of this file like a class that inherits from another class. The parent class is Microsoft's build instructions (get from TFS, build, clean, test, drop, etc.). You get all the default functionality and can override any part of it to do what you want.
- Remember that when you override something you must still accept the same inputs and produce the proper outputs if you want everything to work in the overall system. For example, there is a specific action you have to take to indicate that the build succeeded; if you don't, the build will report as failed even if there were no errors.
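To make the "inheritance" idea concrete, here is a minimal sketch of what a TFSBuild.proj boils down to. The Import line is what pulls in Microsoft's default build behavior; the RunTest property shown is just an illustrative override:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="DesktopBuild"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- "Inherit" all of Microsoft's default Team Build behavior -->
  <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" />

  <!-- Properties and Targets defined after the Import override the defaults -->
  <PropertyGroup>
    <RunTest>false</RunTest>
  </PropertyGroup>
</Project>
```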
The default build logic lives in the Microsoft.TeamFoundation.Build.targets file. Its path is: $(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets
This translates to C:\Program Files\MSBuild\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets in our environment.
Pretty much everything Team Build does is in that file. If you don't override anything in your TFSBuild.proj file then this is exactly the code that will be executed.
There are still a lot of non-obvious things going on in there. You'll see a lot of references to DesktopBuild; that is just there to throw you off. Also, since you're looking at XML and not a procedural language, the execution order is not defined by the order items appear in the file.
A couple of items that differ in a TFS build vs. a regular build are:
- Entry Points
- Variables
You can override EndToEndIteration to create your own fully customized build process, but since you are working in a TFS build environment you'll probably still want to execute several of the existing dependencies. InitializeBuildProperties, for example, is important because it imports a bunch of TFS settings (like paths, TFS Source Control URLs, etc.) into the build so you can work from them.
Also, some activities that you will want to execute are already available. So get familiar with what is there (e.g., the "get" target pulls down the files from source control).
Defining/Overriding variables
There are two kinds of variables in these build files. The most common are properties, defined inside PropertyGroup tags. The other kind are items, defined inside ItemGroup tags; a property holds a single value, while an item holds a list of values (usually file paths).
You can define your own variables like so:
<PropertyGroup>
  <MyVariable>The Value</MyVariable>
  <MyVariable2>The Value of Var #2</MyVariable2>
</PropertyGroup>
Later you can dereference the variables using the $() syntax:
<PropertyGroup>
  <MyVariable3>$(MyVariable) of Var #3</MyVariable3>
</PropertyGroup>
Defining/Overriding Targets
You define (or override existing) targets using the Target tag. Your own Targets need a unique name. Override an existing target by using the existing target name. Microsoft's pre-defined targets include several that are intended to be overridden.
<Target Name="MyTarget">
  <Message Text="This is my target" />
</Target>
A Target can hold more variables (PropertyGroup) and call other actions. Above I call the Message action. There are lots of actions available. See MSDN or Google to get some lists.
In MS' default configuration most Targets have a before and after Target you can override to do whatever you need.
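For example, to run something immediately after the default Get target finishes, you can define the AfterGet hook (one of the empty extension points Microsoft's targets file calls; the Message text here is just illustrative):

```xml
<!-- AfterGet is invoked by Microsoft's targets file right after sources are retrieved -->
<Target Name="AfterGet">
  <Message Text="Finished getting sources into $(SolutionRoot)" />
</Target>
```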
Greater Customization
In our project we didn't want to build anything. We just wanted some files copied out to a development server whenever someone checked in a change. This was too outside the box to do with the default configuration.
So I ended up with the following override of EndToEndIteration to do what I wanted:
<PropertyGroup>
  <EndToEndIterationDependsOn>
    CheckSettingsForEndToEndIteration;
    InitializeBuildProperties;
    InitializeEndToEndIteration;
    InitializeWorkspace;
    Get;
    DeployWebFiles;
    Messages;
  </EndToEndIterationDependsOn>
</PropertyGroup>
<!-- Entry point: this target is invoked on the build machine by the build agent -->
<Target Name="EndToEndIteration"
        DependsOnTargets="$(EndToEndIterationDependsOn)" />
You can see that it includes several of the initialization Targets but then skips to Get and then to my own Targets.
My targets look like this:
<PropertyGroup>
  <DeployWebFilesDependsOn></DeployWebFilesDependsOn>
</PropertyGroup>
<Target Name="DeployWebFiles"
        DependsOnTargets="$(DeployWebFilesDependsOn)">
  <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
             BuildUri="$(BuildUri)"
             Name="DeployWebFiles"
             Message="Deploying Web Files">
    <Output TaskParameter="Id" PropertyName="DeployWebFilesBuildStepID" />
  </BuildStep>
  <ItemGroup>
    <FilesToCopy Include="$(SolutionRoot)\NET 1.1\Trunk\Source\Web\**\*" />
  </ItemGroup>
  <Copy SourceFiles="@(FilesToCopy)"
        DestinationFiles="@(FilesToCopy->'$(DropLocation)\%(RecursiveDir)%(Filename)%(Extension)')"
        SkipUnchangedFiles="true"
        ContinueOnError="false" />
  <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
             BuildUri="$(BuildUri)"
             Id="$(DeployWebFilesBuildStepID)"
             Status="Succeeded" />
  <SetBuildProperties TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
                      BuildUri="$(BuildUri)"
                      CompilationStatus="Succeeded"
                      TestStatus="Succeeded" />
  <OnError ExecuteTargets="PartialSuccess" />
</Target>
<Target Name="PartialSuccess">
  <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
             BuildUri="$(BuildUri)"
             Id="$(DeployWebFilesBuildStepID)"
             Status="Failed" />
  <SetBuildProperties TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
                      BuildUri="$(BuildUri)"
                      CompilationStatus="Failed"
                      TestStatus="Failed" />
</Target>
I found examples of copying files through Google. I had to experiment a little to discover the file structure. You can browse the folders on your Build Agent system (remember step 1?).
Setting the right outputs
Visual Studio/TFS needs certain outputs in order to understand what is going on inside your custom targets.
I used the BuildStep action to display additional steps inside Visual Studio's Build Explorer so I can watch the live progress of my build. TFS automatically adds some of its own build steps, but you can add as many as you want. You first create a new build step and capture its output, a Build Step ID, in a variable. Then you can update that build step's status by passing the ID back in (using the $() syntax).
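Stripped of the deployment details, the create-then-update pattern looks like this (MyStepId is an arbitrary variable name chosen for the example):

```xml
<!-- Create the step; the Output element captures the new step's Id -->
<BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
           BuildUri="$(BuildUri)"
           Message="Doing my custom work">
  <Output TaskParameter="Id" PropertyName="MyStepId" />
</BuildStep>

<!-- ... perform the actual work here ... -->

<!-- Update the same step by passing the captured Id back in -->
<BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
           BuildUri="$(BuildUri)"
           Id="$(MyStepId)"
           Status="Succeeded" />
```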
The last tricky item is getting the build to report as successful. I finally got everything working with no errors or warnings but Visual Studio still reported the build as failed.
It looks like when you allow some of MS' default targets to run, they somehow update the build status (though I can't find where). You can handle this manually by calling the SetBuildProperties action.
We have to set the CompilationStatus and TestStatus to the string "Succeeded" in order to get TFS to believe that the build was successful. Again, here we are outside the box and even though we aren't doing a compilation or a test we have to report them as successful.
Drop Location
The last hiccup we had was with the Drop Location. MS' code takes the drop location you specified in the Build Definition and appends a build-specific path to it:
\\myserver\myshare\BuildDefinitionName_Date.Number\
In our case we didn't want any of that. This meant we could either override the DropBuild target or roll our own Target. Since DropBuild is meant to deploy the OUTPUT of a build, and not the source files, I decided to roll my own. We will probably use the DropBuild target in the future and I didn't want to confuse other team members about what the DropBuild functionality is really supposed to do.
The final hiccup that we haven't been able to resolve is that somewhere in the whole build process the Build folder is still generated in the Drop Location and the log file is placed inside it. I'm not sure if this is a function of the TFS Build service or one of the actions called by MS' build XML code. But it is really minor and we probably won't worry about it.
The End
I hope this helps get someone started. Unless you find a good book on the subject this initial hurdle is pretty difficult to get over. Most of the documentation/books I found were much more advanced and followed the original rule that in order to understand TFS Builds you have to first understand TFS builds.
Team Build Server (2008)
The Backstory
We have grand schemes of checking in code, having it build, test, and deploy all magically. Then we can remove developer access from the web servers and prevent out of band changes.
Unfortunately, we are missing one piece: the guy. We need the guy who can make it all happen.
It should be simple. We are a Microsoft shop with Visual Studio 2010 and Team Foundation Server 2008. That supports Team Build Server and it's all designed right in. But, since we don't have the guy, we've made little progress in that direction.
But, we've found another reason to get team builds in place. Our content designers (who are in a separate department) also have access to the web servers and have become our "analog hole". We've implemented change control processes and they have not. So we are working to get them on board. The biggest problem is: they use Macs and don't use Visual Studio.
We were pleased when Microsoft announced Team Foundation Server 2010 and its acquisition of Teamprise. Our software license would now cover our Mac users and they could begin checking in to TFS. This turns out to be a real pain for them.
They use their own test web server where they build their content. When it's ready they copy the files out to the production web server. Now, in the TFS process, they have to 1) check out the files they will change, 2) copy them out to their test web server, 3) make their changes, 4) copy them back to their TFS workspace, 5) check the files back in, 6) ask our deployment guy to deploy it to Pre-production, 7) make sure it looks good in Pre 8) ask our deployment guy to deploy it to Production.
So, although we are now all synchronized and have regained control over the changes made to production, we've totally given the content design team the shaft. They need to be much more dynamic than what the new process allows for.
So, the compromise is to set up the build server to automatically deploy to Pre-production when they check in content changes. They will worry about streamlining the rest of their process.
So, I became the guy who gets to set up Team Build. See the next post for details of the nightmare.
Friday, January 29, 2010
What's really wrong with the iPad
Apple has announced their iPad. Besides the stupid name and sophomoric jokes, most reviewers don't seem to care for the device.
What I don't get is why they don't get what the iPad is. They review it wishing the whole time it was a Netbook. They want it both ways and just don't seem to get it.
You can't have the extreme ease of the iPhone and the total flexibility of a general purpose computer.
What the iPad (iPhone, iPod Touch) has going for it is:
- Full size screen
- Multi-touch
- No file management (this is the bane of computer illiterates)
- Simple application management
- No booting
- One piece
- Multimedia
You don't need the Flash Player to experience the web. I can't believe they even argue this. The next generation browsers will be powerful enough to eliminate the need for the Flash Player in 90% of cases anyway, and mobile Safari is already on track there. It's not that far away. Any complaints along this vein really mean, "I can't watch Netflix or Hulu on the iPad." The XBox, PS3, and Wii don't have the Flash Player (at least not one worth speaking of) but they stream Netflix. If there is a market for it the iPhone/iPad will get it.
You don't need a physical keyboard to browse the web or to read eBooks. The iPhone has shown you don't need one to play games. Leaving out the keyboard keeps the form factor nice. And with multi-touch the virtual keyboard is usable.
You don't have to use eInk to be usable outdoors. Besides, how much time do these reviewers really spend in full bright sunlight? You just need to turn up the brightness (more on this later).
You don't need a camera. This isn't a mobile device. It's a cordless, not a cell (phone). Perhaps a future version will have a forward facing web cam but I don't think that will fly with AT&T who seems to want you to pay for that data plan but not to use it.
You don't need a file system. This device is accessible to everyone. Mom won't lose her files because she saved them in Program Files instead of My Documents; She won't be downloading spyware or viruses; She'll always know where all her images are; She'll always know where her music and audiobooks are; She'll get the Mood Ring app and install it and run it and she'll have done it all by herself.
What the iPad missed:
Inter-application data sharing isn't limited to copy/paste. The iPad doesn't need a file system but it does need a way to email attachments from a variety of apps without each app having to build out its own "Email This" functionality. Any mp3 player I install should work with any music I've sync'd. Any ebook reader should be able to find any ebooks from any other ebook reader (among compatible formats). The Palm Pilot managed to do this more than 10 years ago. I think the iPad could swing it.
We want to print. I know the home printer world hasn't quite gone networking yet but, you're Apple, figure it out.
We want to share. I have an iPad, my wife likes her Kindle and the kid has an Android device. Make the ePub ebook format really open and let us read the books on any of our devices. Let iTunes sync with other devices even if it's only in a limited way. As much as you'd like it to be it's not an Apple world.
We need multi-tasking. Even if it's not what multi-tasking is on a Desktop. I need enough RAM (I'm looking at you iPod Touch 1g) and a UI for switching apps without exiting, even if background apps are suspended. I want to switch between my eBook reader, email, web browser and contact list without losing my place or waiting for the app to exit and start up again. If we can get it, we want full multitasking where that web page can keep loading while I switch over and finish composing that email.
We need a faster way to change the settings. If I do go out in the sunlight I need to be able to quickly adjust my brightness without exiting my eBook reader and going through 10 menus in the Settings app. Same goes for volume, networking and many of the other settings.
I want a mouse. The iPad supports a keyboard; we want some other peripherals. Let mice, headphones and whatever else work. This would be a boon for games and many other apps. In fact, throw a CD/DVD burning device in there. And while we're at it, where is the expandable storage? An internal microSD/SD card reader is a must. Nintendo's Wii manages this and seems to handle piracy okay. You should follow suit.
I can handle the price. What I can't handle is that Amazon managed to get me free internet access for life and you didn't. I don't need another monthly bill. Convince AT&T to share my existing data plan with both my phone and iPad contributing toward my monthly limit. (Having the iPad is like having a separate electricity bill for each appliance in my house.) Give me some small amount for free and let me upgrade to the $20 or $30 plans for higher limits.
I hope the iPad finds its niche. A lot of computing could go this closed route and I think it would be good for a lot of home users. Power users and businesses are always going to need the more open, powerful, and flexible systems we have today. But mom just can't handle it on her own. She needs an iPad (almost).