Early access to HttpRequest in an ASP.NET application

This is a question I often see asked by people working on legacy applications. When the IIS integrated pipeline mode was introduced, there were a couple of breaking changes that caused (badly designed) early access to the web request to crash.

Applications which used to access HttpRequest early, in Application_Start, greet visitors with an exception:

Request is not available in this context

In simple words, the request is not yet available at this stage of execution. It will be available later. The documentation for the HttpContext.Request property states:

ASP.NET will throw an exception if you try to use this property when the HttpRequest object is not available. For example, this would be true in the Application_Start method of the Global.asax file, or in a method that is called from the Application_Start method. At that time no HTTP request has been created yet.

There are many suggestions that involve catching the exception, checking the message or other properties, and making a decision based on that. While this will work, it’s not a good approach, for a number of reasons.

Most importantly, it will negatively impact the performance of your application, as explained in this MSDN article.

Checking the message is not safe, since a different language version of the .NET Framework may be in use, with localized messages. Or the message may change in an updated version of the .NET Framework.

In the end, determining that the Request is unavailable is not always enough. One may decide to allow this if the information in the request is not critical, but some (awful) code may require the request in order to perform its initialization properly.

The workaround is simple enough that I’m amazed there are so many proposals like the ones described above.

  1. Move initialization from Application_Start to Application_BeginRequest.
  2. Since Application_BeginRequest executes on every request, make sure the initialization runs only once:
public void Application_BeginRequest()
{
    DoStartup();
}

private static volatile bool _startupPerformed = false;
private static readonly object _startupLock = new object();

private void DoStartup()
{
    if (!_startupPerformed)
    {
        lock (_startupLock)
        {
            if (!_startupPerformed)
            {
                // Initialize here; HttpContext.Current.Request is now available.

                // Set the flag only after initialization succeeds, so a failed
                // startup is retried on the next request and other threads never
                // skip the lock while initialization is still in progress.
                _startupPerformed = true;
            }
        }
    }
}
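On .NET 4 and later, the same run-once guarantee can be expressed more compactly with Lazy&lt;T&gt;; this is just a sketch, with the initialization body as a placeholder:

```csharp
private static readonly Lazy<object> _startup = new Lazy<object>(() =>
{
    // Initialize here; the factory runs at most once, thread-safely.
    // HttpContext.Current.Request is available at this point.
    return new object();
});

public void Application_BeginRequest()
{
    var _ = _startup.Value; // triggers initialization on the first request
}
```

One caveat: with the default LazyThreadSafetyMode.ExecutionAndPublication, an exception thrown by the factory is cached and rethrown on every later access, so unlike the lock-based version above, a failed startup is not retried.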

Creating a network share with anonymous access

I needed to create a network share on a Windows server machine which would require no authentication whatsoever from users. This post is intended to serve me as a reminder, since googling the solution every time easily eats away hours.

Which settings need to be changed of course depends on the Windows version of the network share host. This post describes how to do it on Windows Server 2012 R2.

Roughly, what needs to be done is:

  • a network share should be created
  • share permissions need to be set
  • security settings need to be changed

In more words:

  1. Share a folder by opening the folder properties, navigating to the Sharing tab and clicking Advanced Sharing…
  2. Enable sharing and click Permissions
  3. Add Everyone (it should already be there), Guest and ANONYMOUS LOGON, and give them Read access
  4. Open the Group Policy Editor (hit Win+R, type gpedit.msc and hit Enter)
  5. Navigate to Computer Configuration → Windows Settings → Security Settings → Local Policies → Security Options
  6. Change the following:
    • Accounts: Guest account status – change to Enabled
    • Network access: Let Everyone permissions apply to anonymous users – change to Enabled
    • Network access: Restrict anonymous access to Named Pipes and Shares – change to Disabled
    • Network access: Shares that can be accessed anonymously – enter the name of the share you created in the text field

This let me access the share \\<MachineName>\Share without providing any login information.
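For the record, the share-side steps (1–3) can also be scripted with PowerShell’s built-in SmbShare module on Windows Server 2012 R2. A sketch, with the path and share name as placeholders; the security-policy changes from gpedit.msc still have to be made separately:

```
# Create the share and grant read access to the anonymous-capable accounts
New-SmbShare -Name "Share" -Path "C:\Share" `
    -ReadAccess "Everyone","Guest","ANONYMOUS LOGON"
```

There is no SmbShare cmdlet for the Security Options policies, so steps 4–6 remain manual (or go through secedit).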

Running Windows 8.1? With how similar these two OSs seem, you’d expect this to be enough. However, it is not. For Windows 8.1, Microsoft recommends using HomeGroups. It is still possible to get a conventional file share working, but I have not had time to try it out, and it doesn’t seem like good security practice. I’ll just refer you to a find I stumbled upon on the MS TechNet forums. Essentially, it suggests using LanMan level 1 compatibility mode, which allows the OS to accept LM authentication (in addition to NTLMv2). I’m not going to pretend to understand what kind of repercussions this has on machine security, so I won’t recommend doing it outside of your home LAN, and maybe not even there if it’s exposed over WiFi.


Web server returning proper response with 500 status

Problem

I’ve had a funny problem today. It wasn’t so funny during the hour I spent trying to solve it. Directing my browser to a web page which had looked OK yesterday resulted in a horrific view of content without styles, and it smelled like missing script files.

The debugger showed that, indeed, some of the static files could not be downloaded. Status 500, the server said. Internal server error, the server said. OK then, let’s see what this is about. So I open the response body, and what do I see? I see a proper response, from start to end.


This happened to random static files.

Root cause

My bad…

I’d placed some debugging code, which occasionally failed, in Global.asax.cs, in the Application_Start method. The code was such that it failed for random web requests, and IIS was configured through web.config to let ASP.NET handle all requests, including static files. So, from the standpoint of ASP.NET, the web request had failed because an exception was thrown, and it returned status 500 to IIS. However, it did not return any response body along with the status, so IIS grabbed the file and sent it back anyway.
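The web.config setting in question is presumably the usual integrated-pipeline switch that routes every request (static files included) through the managed modules; something like:

```xml
<system.webServer>
  <!-- Static files go through ASP.NET too, so an exception in the
       managed pipeline can fail a request for a .css or .js file -->
  <modules runAllManagedModulesForAllRequests="true" />
</system.webServer>
```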

 

Web fonts and IE on Windows 2012

I have not had a good day with the web. It’s been throwing me curve balls all day. One of the things which wasted my time was Internet Explorer running on a server OS.

I did not expect everything to go completely smoothly, as you usually have to work around IE Enhanced Security Configuration on a server operating system if you wish to browse at all.

I fired up IE’s debugger to see what was going on, expecting that the problem lay in the MIME types configured in IIS (or the web.config file). However, it turned out that IE did not request the web fonts at all. There was no warning or notice in the console either.

Iconless buttons
IE on a server OS does not request web fonts unless the site is trusted

The problem was that IE does not even request web fonts unless the host is added to the list of trusted sites. Go to IE options, the Security tab, select Trusted sites and add the target host to the list. Problem solved.
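If this needs to be automated, the trusted-site entry can presumably also be written straight into the registry; the zone-mapping value below assigns a host to the Trusted sites zone (zone 2), with example.com standing in for your actual host:

```
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\example.com" /v http /t REG_DWORD /d 2 /f
```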


 

Upgrading Windows 8.1 edition to Pro

The problem
I recently got a new laptop with a licensed version of Windows 8.1; however, I needed the Pro edition since I need Hyper-V. So… I downloaded the “multiple editions” setup ISO from MSDN and repaved the machine. It did not ask for a product key and just got activated automatically as the standard edition. After searching around, I found out you can change your product key (in a couple of ways), so I gave it a shot with my Pro key. But I was greeted with an error message.

That key can’t be used to activate this edition of Windows.

Googling and Binging around didn’t help, as everyone pointed to using an “upgrade key”, which is a different thing. That doesn’t work for me since I’m an MSDN subscriber. Why in hell would I then buy another key? I won’t budge… Off to another, longer search session…

The solution

Disable User Account Control

After disabling UAC, changing the product key worked like a charm. I’m as amazed as anyone else…

EntityFramework 7 – Where to next?

With a track record of major strategy shifts, Microsoft wasn’t a company one would expect to play by rules that someone else wrote. After an era of dictating trends, they’ve learned to listen and follow paths set by others. They’ve open sourced a lot of what they’re working on. And not just research projects: major frameworks and components used by millions of websites.

EntityFramework is more proof that they listen. They’ve heard how people want to do data access, and they’ve followed the way set by the Hibernate project. They weren’t very successful in the beginning. I’ve had a lot of rows with EF 3.5, its object tracking, database-first design, et cetera. After years of playing catch-up, with EF 5 we finally got a tool we would dare use in a production project.

After the tremendous effort they’ve made, and a lot of energy invested, they are neck and neck with the best O/RM solutions, and they have a chance to offer something more. The strategy for EF7 has been outlined publicly in their Entity Framework Everywhere initiative.

What I’ve taken away from their post is:

  • It will be lightweight. Many seldom-used features will not be there. Think: one-to-one mappings will remain, inheritance mappings won’t. And surely a few other things.
  • It will be available everywhere. Meaning, we will finally have a worthy heir to SQLite for local storage (think Windows Store, Windows Phone)
  • It won’t be just for relational data anymore. Azure Table Storage will be one of the target stores
  • EntityFramework 7 will be a different beast, and will potentially introduce a noticeable amount of breaking changes to existing projects. Those of you used to DbContext need not worry as much
  • EntityFramework 6 will be developed and maintained in parallel

Those are the tactical steps they are taking. What is the strategy behind them? I’m not so sure these steps stemmed from a strategy; it is the other way around. Microsoft has invested a lot of effort into making a good O/RM tool with a premium developer experience. Now that they are there, they still have developers who’ve invested years in EF, and rather than shifting to maintenance mode and disbanding part of the team, they’ve decided they can afford to innovate. We also don’t see a clear strategy because the ASP.NET team, which is in charge of EF, is now making a tool for Windows Store and Windows Phone.

What Microsoft is doing is providing a lot of choice to developers. Rather than risking making a strategy and missing, they are providing many choices and hoping we developers will take them the rest of the way. I’m enthusiastic about having a lightweight local-storage O/RM, and hope for better performance, but I also hope developers won’t get lost in the forest of choices.

View manipulation in AngularJS applications

Occasionally everyone has the bad luck of having to manipulate the DOM imperatively, based on business logic. It’s a good idea to keep this code away from the controller. The problem comes up when you need access to the controller’s scope in UI code as soon as it’s created.
There are a few rules which need to be followed in order to get the job done AND keep a clean separation of concerns:

  1. Get access to the controller’s scope as soon as possible
  2. Don’t make changes to the controller which introduce a dependency on UI code (it would be possible to call a global function from the controller to announce the event)
  3. Don’t create any global variables

A way to do it (admittedly, an ugly way) is to use the ngIf directive.

I marked the controller’s element with an ID so I can find its scope later, and aliased the controller name to capture the controller reference in a variable. I’ve called it vm here, as in “view-model”.

<div ng-controller="MyController as vm" id="myView">

Then I added a script element where I can place the UI manipulation code, and applied the ngIf directive to it. The parameter of the ngIf directive is the controller alias.

      <script type="text/javascript" ng-if="vm">
        // jqLite cannot select by CSS selector, so look the element up first
        var view = angular.element(document.getElementById('myView'));
        if (view.hasClass('ng-scope')) {
          var $scope = view.scope();

          function viewManipulator($rootScope) {
            alert('Controller has loaded');
            $scope.$watch('Name', function (newValue, oldValue) {
              if (!newValue)
                return;
              alert('Hello ' + newValue + '!');
            });
          }

          viewManipulator['$inject'] = ['$rootScope'];
          view.injector().invoke(viewManipulator);
        }
      </script>

viewManipulator will be called right after the controller is loaded, and therefore the scope is created and the alert will show up. There we can hook up watches over the scope data.

Why does this work?
The key is in the ngIf attribute on the script element and in the ng-scope check. The ngIf directive manipulates the DOM, removing and adding elements depending on the provided expression. The first time the browser loads the script, the script gets executed. The controller is not yet created at this point. For that reason we have the ng-scope check: ng-scope is a special class that AngularJS applies to elements which have their own scope created (for example, controller elements). The first time the check executes, the controller is not created and the ng-scope class is not applied.

AngularJS continues initializing the controller. Before MyController is fully created, the ngIf directive on our script element removes the script (because vm does not exist yet). After the controller is created, ngIf is re-evaluated, vm is available, and the script element is added to the DOM again. Because the script is re-added, it is executed again, but this time the ng-scope check evaluates to true, so our UI manipulation code can run.

I’ve injected $rootScope just to illustrate that we also get dependency injection; it’s not needed in this example. $scope cannot be injected, so we close over the variable in which we captured it. Other services can be injected.

You can see a demonstration of the hack in this plunk. The buttons modify the model in controller code, but the UI is changed from view code hooked up to the controller’s scope. I can already hear some of you say, “But you can do this with a filter!” Yes, I can do what I demonstrated with a filter, but this is just a simplified example. Actual scenarios are not always so simple; sometimes calculations need to be performed and complex decisions made when changing the UI.

Hope this is useful to someone, and that I didn’t break too many rules with this approach.

Creating a screen scraper that works in the background

Recently I needed to share information which I could effectively do only by capturing the screen and uploading the screenshot to a public location. This kind of thing causes my primitive laziness instinct to kick in, and I start grasping for an automated solution. I say primitive instinct because the automation effort in most cases takes more time than doing the work by hand. But that is a topic for a longer post, so back to my problem.

There are a few tools which can capture part of the screen and upload the result to a hosting service, which is nice but doesn’t solve my problem completely. Additionally, I needed to capture the screen at set time intervals, navigate to a different window, and send out the link after the image had been uploaded. Perfect, I thought! I’ll fire up Visual Studio and write something up! I won’t go into the details of how to do all this, as it’s not technically interesting (I might publish the source code in the future). The main problem, which I initially didn’t think about, was how to capture a specific window without using a dedicated machine for the task. Using the tool in my own user session is clearly not a solution, since I don’t want to share the random state of my screen, but specific windows at specific times.

Idea 1
Create Windows user account and keep it logged in in the background.
Failed, because Windows doesn’t do any drawing once a user session goes to the background.

Idea 2
Log into Remote Desktop session and close the connection without logging out.
Failed for the same reason as idea 1. Even if you ignore that you need a Windows Server OS and the appropriate license to have multiple RDP sessions at the same time, the problem from idea 1 remains. This idea might have worked if I had access to a Windows Server newer than Windows 2000.

Idea 3
Host a virtual machine in Hyper-V.
Success! It worked like a charm on the first attempt! Things to remember: you must connect to the machine using the Hyper-V Virtual Machine Connection, not RDP, and the Hyper-V video driver supports resolutions only up to 1600×1200. This solution seems obvious, but when you’re working on such a seemingly simple problem, the first solution that comes to mind isn’t to get a dedicated machine or to sacrifice CPU and memory of your own machine to host a VM. But thanks to Moore’s law, I was able to accept that it’s cheaper for me to give up 512 MB of RAM than to spend a week trying to come up with an elegant solution (and probably failing).

Caliburn.Micro contextual view woes or: XAML is not a purely declarative language

TLDR:
When using Caliburn.Micro in a ViewModel-first approach, and binding contextual content inside a view to the same model (the initial ViewModel), make sure to set View.Context before setting View.Model.

The long version:
This is not a criticism of Caliburn. It is really an excellent framework which speeds up development and makes WPF a little less verbose. It did cost me quite a bit of time to get used to it and to understand its concepts. For the same reason, it’s easy to shoot yourself in the foot and hard to find what the problem is about.

My foot wound was caused by this setup:

<ContentControl cal:View.Model="{Binding}" cal:View.Context="{Binding State}"></ContentControl>

View.Model and View.Context are dependency properties provided by Caliburn.Micro. They give us a way to host content inside a ContentControl, bind it to the model provided by View.Model, and select a view depending on the value of View.Context. There is seemingly nothing wrong with the code above. At least for me, because I look at XAML as a declarative language. However, after running the application I get an exception:

TargetInvocationException wrapped around InvalidOperationException, stating:

“Logical tree depth exceeded while traversing the tree. This could indicate a cycle in the tree.”

at System.Windows.FrameworkElement.FindResourceInTree(FrameworkElement feStart, FrameworkContentElement fceStart, DependencyProperty dp, Object resourceKey, Object unlinkedParent, Boolean allowDeferredResourceReference, Boolean mustReturnDeferredResourceReference, DependencyObject boundaryElement, InheritanceBehavior& inheritanceBehavior, Object& source)
at System.Windows.FrameworkElement.FindResourceInternal(FrameworkElement fe, FrameworkContentElement fce, DependencyProperty dp, Object resourceKey, Object unlinkedParent, Boolean allowDeferredResourceReference, Boolean mustReturnDeferredResourceReference, DependencyObject boundaryElement, Boolean isImplicitStyleLookup, Object& source)
at System.Windows.FrameworkElement.FindImplicitStyleResource(FrameworkElement fe, Object resourceKey, Object& source)
at System.Windows.FrameworkElement.GetRawValue(DependencyProperty dp, PropertyMetadata metadata, EffectiveValueEntry& entry)
at System.Windows.FrameworkElement.EvaluateBaseValueCore(DependencyProperty dp, PropertyMetadata metadata, EffectiveValueEntry& newEntry)
at System.Windows.DependencyObject.EvaluateEffectiveValue(EntryIndex entryIndex, DependencyProperty dp, PropertyMetadata metadata, EffectiveValueEntry oldEntry, EffectiveValueEntry newEntry, OperationType operationType)
at System.Windows.DependencyObject.UpdateEffectiveValue(EntryIndex entryIndex, DependencyProperty dp, PropertyMetadata metadata, EffectiveValueEntry oldEntry, EffectiveValueEntry& newEntry, Boolean coerceWithDeferredReference, Boolean coerceWithCurrentValue, OperationType operationType)
at System.Windows.DependencyObject.InvalidateProperty(DependencyProperty dp, Boolean preserveCurrentValue)
at System.Windows.FrameworkElement.UpdateStyleProperty()
at System.Windows.TreeWalkHelper.InvalidateOnTreeChange(FrameworkElement fe, FrameworkContentElement fce, DependencyObject parent, Boolean isAddOperation)
at System.Windows.FrameworkElement.ChangeLogicalParent(DependencyObject newParent)
at System.Windows.FrameworkElement.AddLogicalChild(Object child)
at System.Windows.Controls.ContentControl.OnContentChanged(Object oldContent, Object newContent)
at MahApps.Metro.Controls.TransitioningContentControl.OnContentChanged(Object oldContent, Object newContent)
at System.Windows.Controls.ContentControl.OnContentChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
at System.Windows.DependencyObject.OnPropertyChanged(DependencyPropertyChangedEventArgs e)
at System.Windows.FrameworkElement.OnPropertyChanged(DependencyPropertyChangedEventArgs e)
at System.Windows.DependencyObject.NotifyPropertyChange(DependencyPropertyChangedEventArgs args)
at System.Windows.DependencyObject.UpdateEffectiveValue(EntryIndex entryIndex, DependencyProperty dp, PropertyMetadata metadata, EffectiveValueEntry oldEntry, EffectiveValueEntry& newEntry, Boolean coerceWithDeferredReference, Boolean coerceWithCurrentValue, OperationType operationType)
at System.Windows.DependencyObject.SetValueCommon(DependencyProperty dp, Object value, PropertyMetadata metadata, Boolean coerceWithDeferredReference, Boolean coerceWithCurrentValue, OperationType operationType, Boolean isInternal)
at System.Windows.DependencyObject.SetValue(DependencyProperty dp, Object value)
at System.Windows.Controls.ContentControl.set_Content(Object value)

Since in my mind WPF was an advanced, nicer, smarter, better HTML, I first went looking for the problem in my ViewModels and code. Little did I know, the problem was in the one line from the beginning of this post. The order of Caliburn’s dependency properties was to blame. Looking more carefully, I noticed that the ContentControl was getting filled with the same user control which hosted it. It turns out that, though bindings give the impression of declarative code, these two are dependent on each other’s order. Since View.Model was set first, Caliburn started looking for an appropriate view for the provided model. Since View.Context was configured later, it defaulted to the main view control. The correct declaration is the following:

<ContentControl cal:View.Context="{Binding State}" cal:View.Model="{Binding}"></ContentControl>

Asynchronously deadlocked, or Do not wrap async methods into sync wrappers (HttpClient.GetAsync not returning)

I’m currently working on a new project for which I decided to try out some new libraries and, along the way, familiarize myself a little better with the new C# async language extensions. Even though I’ve read about the extensions and have a basic understanding of the .NET Task Parallel Library (which is sort of the base for the async extensions), it didn’t take me long to get stuck.

The application consists of an ASP.NET Web API application which provides some data to clients, a WPF client application utilizing Caliburn.Micro for presentation, and some auxiliary libraries. The WPF application attempts to retrieve data from the Web API service in order to authenticate the application user. This is done using the HttpClient class and the Web API extensions for it.

While sketching out the application code, things worked fine. After trying to separate the code into methods, threads started getting deadlocked and the UI started freezing. Nothing notable had changed. The code looked something like this:

private void btnShowFoo_Click(object sender, RoutedEventArgs e)
{
    string foo = PrettyFoo();
    ShowFoo(foo);
}

public string PrettyFoo()
{
    // Blocks the calling (UI) thread until the async method completes
    var foo = GetFooAsync().Result;
    GiveMakeOver(foo);
    return foo;
}

public static async Task<string> GetFooAsync()
{
    HttpClient client = new HttpClient();
    HttpResponseMessage response = await client.GetAsync("http://www.msdn.com");
    //    ...
}

The problem is not very easy to spot. btnShowFoo_Click calls PrettyFoo to get the value which needs to be displayed. PrettyFoo in turn calls an asynchronous method, waits for its result synchronously, and then returns it. This looked OK to me. It wasn’t news to me that compiled async code actually generates helper classes, callbacks and other machinery to prettify threading, but still I let this go without thinking about what it actually does.

btnShowFoo_Click is executed on the UI thread. When it calls PrettyFoo, it blocks until the called method completes and returns a result. PrettyFoo calls GetFooAsync, which yields control of the thread at its first await. This is where things get interesting. Before the UI thread is yielded, the await captures the UI synchronization context, so the continuation of GetFooAsync is scheduled to run on the UI thread. However, since PrettyFoo is already blocking the UI thread on .Result, the continuation which is supposed to produce the result waits for the UI thread to be released forever. This causes the UI to hang.

Since I probably didn’t do a good job explaining the problem, I suggest you read this super awesome article by Stephen Toub (Should I expose synchronous wrappers for asynchronous methods?).

The solution? It’s kind of obvious now, isn’t it? Do not wrap async methods into synchronous wrappers before thinking thrice. While we’re at it, don’t wrap sync methods into async wrappers either; Stephen Toub has another excellent article on that too. If you’re thinking about using the async C# language extensions, it’s a good idea to spend the time to read and understand them. Seems I didn’t really understand them the first time.

What should my fixed code look like? Something like this:

private async void btnShowFoo_Click(object sender, RoutedEventArgs e)
{
    string foo = await PrettyFoo();
    ShowFoo(foo);
}

public async Task<string> PrettyFoo()
{
    var foo = await GetFooAsync();
    GiveMakeOver(foo);
    return foo;
}

public static async Task<string> GetFooAsync()
{
    HttpClient client = new HttpClient();
    HttpResponseMessage response = await client.GetAsync("http://www.msdn.com");
    //    ...
}
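For completeness: if for some reason the synchronous wrapper cannot be removed, the deadlock can also be avoided by not capturing the UI context inside the library method. A sketch of that mitigation; going “async all the way”, as above, is still the preferable fix:

```csharp
public static async Task<string> GetFooAsync()
{
    HttpClient client = new HttpClient();

    // ConfigureAwait(false) tells the awaiter not to resume on the captured
    // UI synchronization context, so the continuation can run on a thread-pool
    // thread even while the UI thread is blocked on .Result.
    HttpResponseMessage response = await client.GetAsync("http://www.msdn.com")
                                               .ConfigureAwait(false);
    return await response.Content.ReadAsStringAsync().ConfigureAwait(false);
}
```

Note that every await on the path has to use ConfigureAwait(false) for this to be safe; a single context-capturing await is enough to reintroduce the deadlock.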