Wednesday 19 December 2012

LinqPad + EF Code First: The type 'UserQuery+' was not mapped. Check that the type has not been explicitly excluded by using the Ignore method or NotMappedAttribute data annotation. Verify that the type was defined as a class, is not primitive, nested or generic, and does not inherit from EntityObject.

Take the following simple Entity Framework code:
public static void Main()
{
 var context = new MyContext();
 context.Database.Connection.ConnectionString = "Data Source=.;Integrated Security=SSPI;Initial Catalog=AndyTest;MultipleActiveResultSets=true";
}

public class Person
{
 [Key]
 public int Id { get; set; }
 public int Age { get; set; }
 [StringLength(50)]
 public string Name { get; set; }
}

public class MyContext : DbContext
{
 public DbSet<Person> Persons { get; set; }
}
If you run this in LinqPad, you get the error: The type 'UserQuery+Person' was not mapped. Check that the type has not been explicitly excluded by using the Ignore method or NotMappedAttribute data annotation. Verify that the type was defined as a class, is not primitive, nested or generic, and does not inherit from EntityObject.

This is because all the code written in LinqPad is encapsulated within a UserQuery class, so Entity Framework complains that the Person class is nested within LinqPad's UserQuery class. To allow this code to work within LinqPad, add the following line to the top of the query:
#define NONEST
http://stackoverflow.com/questions/8730225/linqpad-ef-4-1-sqlce

Thursday 29 November 2012

Checking in from VS2008 to TFS2010

In order to check in from VS2008 to TFS2010, you need to install

Team Foundation Server Power Tools - October 2008
Power Tools that extend the Team Foundation Server integration with Visual Studio.

This enables custom check-in policies.

Context sensitive help in Endur

To get context sensitive help in Endur, locate the help file (e.g. C:\OpenLink\helpdir\olf.chm) and edit your start configuration, setting the environment variable:

SET AB_HELP_FILE=C:\OpenLink\helpdir\olf.chm

To set the variable immediately, go into System Wide Configuration.
Enter Help into the Search box. Click on AB_HELP_FILE.
In the right hand pane, select the username and enter the User Value (e.g. C:\OpenLink\helpdir\olf.chm).

Click Save.

Thursday 22 November 2012

T-SQL: Selecting the top n rows from a Group By


I have the following data


deal_tracking_num  tran_num  type_id  value    personnel_id  last_update
897693             897693    20065    Manual   20015         2012-11-22 10:59:38.240
897693             897693    20065    Non-STP  20015         2012-11-22 11:09:09.190
897695             897695    20065    Manual   20015         2012-11-22 10:58:58.063
897695             897702    20065    Manual   20015         2012-11-22 10:59:23.493
897698             897698    20065    STP      20015         2012-11-22 10:16:23.903
897698             897698    20065    eCM      20015         2012-11-22 10:18:38.587
897698             897701    20065    eCM      20015         2012-11-22 10:21:07.350
897698             897703    20065    eCM      20015         2012-11-22 11:22:25.377

To select the first row (earliest by last_update) for each deal_tracking_num, use the following query:

select a.deal_tracking_num, a.value from 
(
     SELECT ab.deal_tracking_num, ati.*, ROW_NUMBER() over (partition by deal_tracking_num ORDER BY ab.deal_tracking_num, ati.last_update) as rank
     FROM ab_tran_info_history ati JOIN ab_tran ab ON ab.tran_num = ati.tran_num
     WHERE ati.type_id = 20065 -- Trade Confirmation Type
) a
where RANK = 1

This returns

deal_tracking_num  value
897693             Manual
897695             Manual
897698             STP

Wednesday 21 November 2012

Log4Net - writing to event log with an impersonated user

If an ASP.NET application impersonates a user then log4net may fail to write to the event log.

One way around this is to permission the Event Log so that all Authenticated Users can write to it.

This is described here:
http://support.microsoft.com/kb/2028427

and
http://jpadda.wordpress.com/2010/08/08/event-log-write-permissions/



1. Determine the Primary Domain Controller by executing the following command in an elevated command prompt:

NETDOM QUERY /D:<mydomain> PDC

2. Create a VBS script with the following code:

strComputer = "localhost"
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")

Set objAccount = objWMIService.Get _
    ("Win32_UserAccount.Name='<username>',Domain='<mydomain>'")
Wscript.Echo objAccount.SID

3. Run the script in an elevated command prompt and obtain the user's SID:

cscript <code.vbs>

4. Dump the existing Application Event Log security to a file:
wevtutil gl application > C:\temp\security.txt

5. Grant authenticated users write access: edit the line containing the channelAccess string, appending the following: (A;;0x3;;;AU).

6. Apply the updated setting, e.g.

wevtutil sl Application /ca:O:BAG:SYD:(A;;0xf0007;;;SY)(A;;0x7;;;BA)(A;;0x7;;;SO)(A;;0x3;;;IU)(A;;0x3;;;SU)(A;;0x3;;;S-1-5-3)(A;;0x3;;;S-1-5-33)(A;;0x1;;;S-1-5-32-573)(A;;0x3;;;AU)

Tuesday 20 November 2012

Copying data from one database to another

Using SSIS is a good way of bulk copying data from one database to another.

You can use a wizard within SQL Server Management Studio to create the SSIS package. Right-click on the database and select Tasks > Export Data. At the end you have the option to create an SSIS package and either save it directly into an SSIS server or as a file.

There are many ways to run the SSIS package. For developers it is probably simplest to create an SSIS project and add the created package into it. You can then run the package from within Visual Studio, or kick it off from code as sketched below.
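If you do want to execute the package programmatically, something along these lines works (a rough sketch, not taken from the original post - the package path is hypothetical and it assumes a reference to Microsoft.SqlServer.ManagedDTS):

using Microsoft.SqlServer.Dts.Runtime;

class PackageRunner
{
    static void Main()
    {
        var app = new Application();

        // Load the file-based package saved by the Export Data wizard (hypothetical path)
        Package package = app.LoadPackage(@"C:\SSIS\ExportData.dtsx", null);

        // Execute returns Success, Failure, Completion etc.
        DTSExecResult result = package.Execute();
        System.Console.WriteLine(result);
    }
}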

Thursday 15 November 2012

Cleaning up the WinSXS folder on Windows 2008 R2

This article describes how to clean up the WinSXS folder on Windows Server 2008 R2.

Basically, you open an elevated command prompt and enter
DISM.exe /online /Cleanup-Image /spsuperseded

Tuesday 13 November 2012

.NET 4.0 / 4.5 serialization - System.Runtime.Serialization.InvalidDataContractException: No set method for property x in type y

Last night I observed a difference in the handling of WCF service contracts between .NET 4.0 and 4.5.

The service has an interface that allows inherited types to be passed over it.

[ServiceKnownType("GetKnownProductTypes", typeof(KnownTypeHelper))]
public interface IDealBookingService
{
  [OperationContract]
  int SaveProduct(ProductBase product);
}
 
One of these derived types, CommStorProduct, had a property that did not have a set accessor:

[DataMember]
public DateTime StartDate3
{
    get { return StartDate; }
}

Running this on a developer’s machine, no problems were encountered.
However, deploying this to a downstream environment, the following error was thrown when the service started:

Unhandled Exception: System.Runtime.Serialization.InvalidDataContractException:
No set method for property 'StartDate3' in type 'ServiceContract.MyDerivedType'.
   at System.Runtime.Serialization.DataContract.DataContractCriticalHelper.ThrowInvalidDataContractException(String message, Type type)
   at System.Runtime.Serialization.ClassDataContract.ClassDataContractCriticalHelper.ImportDataMembers()
   at System.Runtime.Serialization.ClassDataContract.ClassDataContractCriticalHelper..ctor(Type type)
   at System.Runtime.Serialization.DataContract.DataContractCriticalHelper.CreateDataContract(Int32 id, RuntimeTypeHandle typeHandle, Type type)
   at System.Runtime.Serialization.DataContract.DataContractCriticalHelper.GetDataContractSkipValidation(Int32 id, RuntimeTypeHandle typeHandle, Type type)
   at System.Runtime.Serialization.XsdDataContractExporter.GetSchemaTypeName(Type type)
   at System.ServiceModel.Dispatcher.DataContractSerializerOperationFormatter.ValidateDataContractType(Type type)
   at System.ServiceModel.Dispatcher.DataContractSerializerOperationFormatter..ctor(OperationDescription description, DataContractFormatAttribute dataContractFormatAttribute, DataContractSerializerOperationBehavior serializerFactory)
   at System.ServiceModel.Description.DataContractSerializerOperationBehavior.GetFormatter(OperationDescription operation, Boolean& formatRequest, Boolean& formatReply, Boolean isProxy)
   at System.ServiceModel.Description.DataContractSerializerOperationBehavior.System.ServiceModel.Description.IOperationBehavior.ApplyDispatchBehavior(OperationDescription description, DispatchOperation dispatch)
   at System.ServiceModel.Description.DispatcherBuilder.BindOperations(ContractDescription contract, ClientRuntime proxy, DispatchRuntime dispatch)
   at System.ServiceModel.Description.DispatcherBuilder.InitializeServiceHost(ServiceDescription description, ServiceHostBase serviceHost)
   at System.ServiceModel.ServiceHostBase.InitializeRuntime()
   at System.ServiceModel.ServiceHostBase.OnBeginOpen()
   at System.ServiceModel.ServiceHostBase.OnOpen(TimeSpan timeout)
   at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
   at Service.Host.ServiceController.Start() in C:\Source Code\Test\TestWCF\Service.Host\ServiceController.cs:line 14
   at Service.Host.Program.Main(String[] args) in C:\Source Code\Test\TestWCF\Service.Host\Program.cs:line 13

The developer's machine had .NET 4.5 installed. After installing .NET 4.5 on the target machine the problem was resolved. (The other, code-based solution was to add an internal set accessor to the property, as the exception hinted.)
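For reference, that code-based fix is simply to give the property a setter the serializer can find - a sketch only, since the setter body is an assumption (the original property just exposed StartDate):

[DataMember]
public DateTime StartDate3
{
    get { return StartDate; }
    // A non-public setter is enough to satisfy the DataContractSerializer under full trust
    internal set { StartDate = value; }
}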

Inspecting the .NET code, it can be seen there is a runtime serialization difference between the two versions - .NET 4.5 is more forgiving than the earlier version for this scenario.

 
I haven’t found any documentation about serialization changes between the versions.

Thursday 8 November 2012

SyntaxHighlighter is a very flexible code beautifier, suitable for styling code and SQL on the Web.

SELECT * versus SELECT Col1, Col2 and Query Plans

Yesterday I deployed a colleague's project - a website and a SQL Server database - to my local machine. I hadn't seen the code before but I'd seen the site in action. The website used Entity Framework to connect to the underlying database.

When I first ran the web page today there was no data in the database, so I ran an SSIS package to populate it.

Then when I reloaded the web page it timed out while executing the SQL query. Odd, I thought, so I looked at the underlying query. Entity Framework was issuing:

SELECT 
1 AS [C1], 
[GroupBy1].[K2] AS [IntraDayIntLegalEntityName], 
[GroupBy1].[K1] AS [IntraDayIntPortfolioName], 
[GroupBy1].[K3] AS [BaseCurrency], 
[GroupBy1].[A1] AS [C2], 
[GroupBy1].[A2] AS [C3]
FROM ( SELECT 
 [Extent1].[K1] AS [K1], 
 [Extent1].[K2] AS [K2], 
 [Extent1].[K3] AS [K3], 
 SUM([Extent1].[A1]) AS [A1], 
 SUM([Extent1].[A2]) AS [A2]
 FROM ( SELECT 
  [Extent1].[IntraDayIntPortfolioName] AS [K1], 
  [Extent1].[IntraDayIntLegalEntityName] AS [K2], 
  [Extent1].[BaseCurrency] AS [K3], 
  [Extent1].[IntraDayBasePayment] - [Extent1].[YestBasePayment] AS [A1], 
  [Extent1].[IntraDayBasePayment] - [Extent1].[YearEndBasePayment] AS [A2]
  FROM [FxR].[RawDataView] AS [Extent1]
 )  AS [Extent1]
 GROUP BY [K1], [K2], [K3]
)  AS [GroupBy1]

So I ran this query and it took ages. Longer than 15 minutes!

So I started to debug this. The query selected from a view which itself did several left joins on another view.
But even more bizarrely, if I did

SELECT * FROM [FxR].[RawDataView] AS [Extent1]

Then the query took 10 seconds. If I re-ran
SELECT [IntraDayIntLegalEntityName] FROM [FxR].[RawDataView] AS [Extent1]

It took longer than 15 minutes! Hold on, why should SELECT * be quicker than selecting a single column?

Time to debug! Enter the query plan. It was clear that the query plans were different for the two queries. Looking through the query plan (right to left, top to bottom) I could see that the faster query seemed to only pass 3000 rows early on, whereas the slow query passed 100,000 rows several times. This helped explain the slowness.


Also running the command

SET STATISTICS IO ON

showed the following information:

Table 'TradeData'. Scan count xx, logical reads xxxxx.

The scan count and logical reads were vastly higher for the slow query.

Then, finally, I spotted the clue that led to the solution.
You could see from the plan that the estimated number of rows was vastly different from the actual number of rows. Clearly the query plan was expecting far fewer rows than it got. It was obviously out of date.

Running the following query showed when the statistics were last updated:

SELECT
t.name AS Table_Name
,i.name AS Index_Name
,i.type_desc AS Index_Type
,STATS_DATE(i.object_id,i.index_id) AS Date_Updated
FROM
sys.indexes i JOIN
sys.tables t ON t.object_id = i.object_id JOIN
sys.partitions sp ON i.object_id = sp.object_id
WHERE
i.type > 0 and sp.rows > 0
-- AND i.name like '%RawDataView%'
ORDER BY
STATS_DATE(i.object_id,i.index_id) DESC,
t.name ASC
,i.type_desc ASC
,i.name ASC 

And this indicated the statistics were updated yesterday. SQL Server had determined the query plan based upon yesterday's data - when there was none there! After adding 100,000 rows today, the query plan was hopelessly out of date and not applicable.

The solution? I ran

UPDATE STATISTICS Fxr.TradeData WITH FULLSCAN

on each table and suddenly my query went down from 15 minutes to 10 seconds. The moral of the story? Learn to read query plans, and update your statistics if your data changes significantly!

Wednesday 24 October 2012

Visual Studio 2010 Test Project - Typemock Isolator Add-In

I've spent 6 hours wondering why running a unit test within Visual Studio 2010 started failing. It appears to work OK when the test host is configured to run in a 64-bit process on a 64-bit machine - but set it to force tests to run in a 32-bit process and the test fails.

The tests fail on VS2010. I have a simple unit test project with a single test that reads Assert.IsTrue(true);

Visual Studio says "Test Run Error" and "Test Host Process exited unexpectedly" and "Failed to queue test run <blah>: Unable to start the agent process"

The event viewer logs:

AgentProcess cannot register with AgentService, make sure service is running.
(QTAgent32.exe, PID 5828, Thread 1) AgentProcess: could not register with agent service: System.InvalidProgramException: Common Language Runtime detected an invalid program.
at System.Uri.ParseScheme(String uriString, Flags& flags, UriParser& syntax)
at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind)
at System.Uri..ctor(String uriString)
at Microsoft.VisualStudio.TestTools.Execution.ExecutionUtilities.GetChannelUri(IChannelReceiver receiver)
at Microsoft.VisualStudio.TestTools.Execution.AgentContext.AddAgentProcessChannelUri(IChannelReceiver receiver)
at Microsoft.VisualStudio.TestTools.Agent.AgentProcess.Run()

I configured QTAgent32.exe.config to log:

<system.diagnostics>
    <switches>
      <!-- You must use integral values for "value".
           Use 0 for off, 1 for error, 2 for warn, 3 for info, and 4 for verbose. -->
      <!-- <add name="EqtTraceLevel" value="2" /> -->
   <add name="EqtTraceLevel" value="4" />
    </switches>
  </system.diagnostics>

<appSettings>
    <add key="CreateTraceListener" value="yes"/>
  </appSettings>

and then VSTTAgentProcess.log reports the following:

376, 1, 2012/10/24, 15:23:19.683, GMT00560\QTAgent32.exe, DefaultListener removed
I, 7376, 1, 2012/10/24, 15:23:19.691, GMT00560\QTAgent32.exe, Create trace listener in path: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE FileName: VSTTAgentProcess.log
I, 7376, 1, 2012/10/24, 15:23:19.693, GMT00560\QTAgent32.exe, AgentProcess: starting
V, 7376, 1, 2012/10/24, 15:23:19.885, GMT00560\QTAgent32.exe, ExecutionUtilities.RegisterIpcClientServerChannel: Registering IPC channel on IPC port 'eqt-a3a97055-63d2-6030-9889-4dabb07454ff'
V, 7376, 1, 2012/10/24, 15:23:19.977, GMT00560\QTAgent32.exe, ExecutionUtilities.RegisterIpcClientServerChannel: Successfully registered IPC channel on IPC port 'eqt-a3a97055-63d2-6030-9889-4dabb07454ff'
I, 7376, 1, 2012/10/24, 15:23:20.358, GMT00560\QTAgent32.exe, AgentProcess: Connecting to ipc://eqt-113ff837-7a8d-bbfa-ae5b-ec4faf4c7e57/AgentProcessManagerAccessManager.rem
V, 7376, 1, 2012/10/24, 15:23:20.380, GMT00560\QTAgent32.exe, AgentProcess: Calling GetAgentProcessProxy. Execution key: d1358a54-eee6-4108-90d8-0ae2c58092c5
E, 7376, 1, 2012/10/24, 15:23:20.744, GMT00560\QTAgent32.exe, AgentProcess: could not register with agent service: System.InvalidProgramException: Common Language Runtime detected an invalid program.
   at System.Uri.ParseScheme(String uriString, Flags& flags, UriParser& syntax)
   at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind)
   at System.Uri..ctor(String uriString)
   at Microsoft.VisualStudio.TestTools.Execution.ExecutionUtilities.GetChannelUri(IChannelReceiver receiver)
   at Microsoft.VisualStudio.TestTools.Execution.AgentContext.AddAgentProcessChannelUri(IChannelReceiver receiver)
   at Microsoft.VisualStudio.TestTools.Agent.AgentProcess.Run()
I, 7376, 1, 2012/10/24, 15:23:20.785, GMT00560\QTAgent32.exe, AgentProcess: shutting down.
I, 7376, 1, 2012/10/24, 15:23:20.789, GMT00560\QTAgent32.exe, AgentProcess: exiting

In the end I ran devenv.exe /safemode and by a process of elimination discovered that it was the Typemock Isolator add-in! I am running Typemock Isolator 6.1.2.

Our support agreement just expired. Is that a coincidence? Does this add-in require a current license to operate?

Test host process exited unexpectedly - Failed to queue test run: Unable to start the agent process

All my unit tests started failing on VS2010. I don't know whether this was something to do with a VS2012 / .NET 4.5 install - I presume not as I'm convinced they were working after this.

Anyhow, they just stopped working.

After 3 hours of investigation I found the way to get them working was to set
Test > Edit Test Settings > Hosts > Run tests in 32 bit or 64 bit process : Set to Run tests in 64 bit process on 64 bit machine


 
Note 32-bit debugging settings can be found in C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\QTAgent.exe.config. You can set the debug level to:

<switches>
      <!-- You must use integral values for "value".
           Use 0 for off, 1 for error, 2 for warn, 3 for info, and 4 for verbose. -->
      <add name="EqtTraceLevel" value="4" />
    </switches>

Use ProcMon to check that it is the QTAgent.exe that is being executed. VS2012 uses a different process.

Thursday 11 October 2012

Telerik MVC extensions and JQuery UI - Object doesn't support this property or method - JQuery conflict

I had an MVC project that used the Telerik MVC Grid.

This uses jquery-1.7.1-min.js.

But I also updated my project to include jquery-1.8.2.js elsewhere in order to use jQuery UI's tooltips.

When I ran the project I got a Javascript error:

Object doesn't support this property or method on
$(document).tooltip();
This was because jQuery was referenced twice in the HTML output.

So I then deleted the jquery-1.7.1-min.js that was present in Telerik's 2012.2.607 folder. However this then caused another problem, whereby JavaScript complained that:

0x800a01b6 - Microsoft JScript runtime error: Object doesn't support property or method 'tDropDownList'

This was because Telerik's JQuery was removed.

The solution was to use:
@Scripts.Render("~/bundles/jquery")

at the top of the HTML page and set jQuery(false) on the Telerik ScriptRegistrar:

@(Html.Telerik().ScriptRegistrar().Globalization(true).jQuery(false).DefaultGroup(group => group.Combined(true).Compress(true)))

This results in JQuery being added at the top of the HTML and prevents Telerik adding another script tag.
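For completeness, ~/bundles/jquery refers to a System.Web.Optimization script bundle. A minimal sketch of how it is typically registered (the file location and {version} pattern are assumptions, not taken from the post):

public static void RegisterBundles(BundleCollection bundles)
{
    // Picks up e.g. ~/Scripts/jquery-1.8.2.js via the {version} wildcard
    bundles.Add(new ScriptBundle("~/bundles/jquery").Include("~/Scripts/jquery-{version}.js"));
}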

Telerik MVC Extensions Grid - AJAX in cell editing (batch editing) with readonly columns

If you are using AJAX in-cell (batch) editing and a column is read-only, the value will not be passed to the controller on the update AJAX method.

The solution is to remove the ReadOnly attribute from the datasource viewmodel, and instead apply it to the column.

ViewModel
public class TransactionViewModel
{
    // [ReadOnly(true)]  <- removed from the viewmodel
    public Int32 DealNum { get; set; }
}

View
columns.Bound(m => m.DealNum).Width(70).Title("Deal No.").ReadOnly();

Other references
http://www.telerik.com/community/forums/aspnet-mvc/grid/mvc-grid-readonly-field-gets-null-when-updated.aspx

Tuesday 2 October 2012

EF5.0 Code First - The model backing the 'xxxContext' context has changed since the database was created. Consider using Code First Migrations to update the database

I created a simple bit of code to load data from an existing table into a POCO. However as I was developing the POCO I got this error.

The model backing the 'xxxContext' context has changed since the database was created. Consider using Code First Migrations to update the database

This is because EF creates a table [dbo].[__MigrationHistory] when it is first run and stores the model in there. If you subsequently change the POCO it differs from the saved model and this error is thrown.

My solution was to delete the table and thus any reference to the historical model.

using (var context = new EndurContext())
{
    using (var conn = new SqlConnection(connections["EndurContext"]))
    {
        using (var command = conn.CreateCommand())
        {
            command.CommandType = CommandType.StoredProcedure;
            command.CommandText = "[trade].[GetPhysFixedDeals]";
            conn.Open();

            using (var reader = command.ExecuteReader(CommandBehavior.CloseConnection))
            {
                var jobs = ((IObjectContextAdapter)context).ObjectContext.Translate<PhysicalFixedDeal>(reader);
            }
        }
    }
}
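An alternative to deleting [__MigrationHistory] is to switch the initializer off entirely, so EF never runs the model-compatibility check against the existing database. A one-line sketch (assuming EF 5 Code First and that you never want EF to create or migrate the schema):

// e.g. in the context's static constructor or at application start-up
Database.SetInitializer<EndurContext>(null);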

Friday 21 September 2012

HandleErrorAttribute, Elmah and the Yellow Screen of Death

If you apply the HandleErrorAttribute to an MVC application then it will redirect you to the Error page rather than serve up the Yellow Screen of Death.

But customErrors must be turned on for this to work.

<system.web>
  <customErrors mode="On">
  </customErrors>
</system.web>
Doing this will handle exceptions (e.g. the ExceptionContext's ExceptionHandled property will be set to true).

public class CustomHandleErrorAttribute : HandleErrorAttribute
{
    public override void OnException(ExceptionContext context)
    {
        base.OnException(context);

        // context.ExceptionHandled will return true
        if (!context.ExceptionHandled)
            return;
    }
}

Elmah will only pick up UNHANDLED exceptions. Under normal circumstances it won't do anything with the UI; it will leave the Yellow Screen of Death intact if one is served. However it will pick up the error and email and log as appropriate.

To use Elmah in conjunction with the HandleErrorAttribute you have to add code to fire off Elmah even though the exception was handled.

A simple example is shown below (with a CustomHandleErrorAttribute)

public class CustomHandleErrorAttribute : HandleErrorAttribute
{
    private readonly ILog log;

    public CustomHandleErrorAttribute(ILog log)
    {
        this.log = log;
    }

    public override void OnException(ExceptionContext context)
    {
        // Give the framework a chance to handle the exception
        base.OnException(context);

        // If the exception was not handled then allow the exception to bubble up - Elmah will catch it
        if (!context.ExceptionHandled) return;

        // If the exception was handled by the framework then log it for the record
        var httpContext = context.HttpContext.ApplicationInstance.Context;
        var signal = ErrorSignal.FromContext(httpContext);
        signal.Raise(context.Exception, httpContext);
        log.Error(context.Exception);
    }
}
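Because the attribute takes a constructor argument it is registered as a global filter rather than applied declaratively. A minimal sketch of that registration (the log4net wiring and the MvcApplication type name are assumptions):

public static class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        // Any ILog implementation will do; log4net's LogManager is assumed here
        filters.Add(new CustomHandleErrorAttribute(LogManager.GetLogger(typeof(MvcApplication))));
    }
}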
References:

http://blog.dantup.com/2009/04/aspnet-mvc-handleerror-attribute-custom.html

Wednesday 19 September 2012

403 Access is denied due to invalid credentials, MVC3, IIS7

I had an MVC3 application that compiled and ran fine locally, but when deployed to the UAT server running IIS7, the application failed and returned a 403 Access Denied error.

After a long debugging session I determined this was due to the fact that certain MVC DLLs were not present on the target machine. One way to get round this is to explicitly add them as references to the Web project and set Copy Local to true.

The DLLs I had to add were

System.Web.WebPages
System.Web.WebPages.Razor

Other reasons why it may return such an unhelpful error are:

the application errors in the startup routines
the directory permissions are not correct
Windows authentication is not enabled for the application in IIS7

....and others

Visual Studio remote debugger is version sensitive

The Visual Studio remote debugger has to be connected to the same version as the main IDE.
You cannot connect the 2010 remote debugger to VS.NET 2012, and vice-versa!

Thursday 13 September 2012

Watching DVDs on the train

To rip a DVD for watching on the train:

1. Use DVD Decrypter to extract the contents of the DVD onto the file system.
2. Then use Handbrake to convert it into a watchable format.

For extracting BluRay discs, you can use MakeMKV.

Thursday 30 August 2012

Installing .NET 4.5 (or VS.NET 2012) causes MSB3270

After installing .NET 4.5 (or VS.NET 2012, which installs the former), it is possible that projects previously built in VS.NET 2010 start warning:

MSB3270: There was a mismatch between the processor architecture of the project being built "MSIL" and the processor architecture of the reference

This is a legitimate warning. It warns about platform mismatches; for example, in my case a project that targets "AnyCPU" but has an x86 dependency within it. As I said, this is a legitimate warning, but it is a bit too naggy in our circumstances: although the project is built with an x86 dependency, at runtime an x64 version is loaded on x64 machines. You could argue this is not the right way to do it and that entirely separate x64 and x86 builds should exist, but this is the way it is.

Fortunately there is a workaround as described in the .NET 4.5 readme:


1.3 .NET Framework Product Issues

1.3.1 General Issues

1.3.1.1 After the .NET Framework 4.5 is installed, the MSB3270 warning occurs in builds in Visual Studio 2010
Some project types in Visual Studio 2010 - for example, C++ projects - set the Platform property but don't set the PlatformTarget property. When PlatformTarget isn't set, MSBuild can't determine the project architecture - for example, x86. Instead, it assumes that the project target is Any CPU and throws architecture-mismatch warning MSB3270 even if the project and references are targeting the same architecture.
To resolve this issue:
  • In the project file, set the PlatformTarget property to the appropriate value - for example:
    <PropertyGroup>
    <PlatformTarget>x86</PlatformTarget>
    </PropertyGroup>
-or-
  • Ignore the warning.
-or-
  • In the project file, suppress the warning:
    <PropertyGroup>
    <ResolveAssemblyWarnOrErrorOnTargetArchitectureMismatch>None</ResolveAssemblyWarnOrErrorOnTargetArchitectureMismatch>
    </PropertyGroup>

Friday 3 August 2012

SSIS Merge Join fails with indeterminate results

With an SSIS Merge Join, the ordering of the columns on which the join takes place is critical. Get them wrong - by setting the IsSorted property to true on the data source and setting the wrong SortKey properties - and you get indeterminate results. These are the worst kind - there are no errors and your join fails occasionally (depending upon the data being operated upon).

The safest thing to do is to manually sort the columns within the SSIS package using the Sort task. Sure, it may consume some additional CPU cycles, but it'll save you a world of pain when some data is (or isn't) being returned from the merge output when you least expected it.

Sunday 29 July 2012

WCF date time serialization

I just encountered a horrible problem where my WCF service was doing all the hard work calculating hourly volumes for long and short days with respect to Daylight Savings Time. Results were being returned as two properties, StartTimeLocal and StartTimeUtc.

My unit tests all worked fine.

But when clients consumed the service, the times they received were different and caused problems.

In the end I realised - because the DateTime was defaulting to DateTimeKind.Local, the client was taking the property named UTC, assuming it was local time and adjusting it!

The rule is - it is safest to return your DateTimes as DateTimeKind.Utc - that way the client will not make assumptions about timezone conversions.
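A small sketch of what that looks like in code (HourlyVolume and its properties are made-up names to match the post, not the actual contract):

using System;
using System.Runtime.Serialization;

[DataContract]
public class HourlyVolume
{
    [DataMember] public DateTime StartTimeLocal { get; set; }
    [DataMember] public DateTime StartTimeUtc { get; set; }
}

public static class VolumeFactory
{
    public static HourlyVolume Create(DateTime startLocal)
    {
        return new HourlyVolume
        {
            // Make the Kind explicit so the serializer emits the right offset and
            // the client does not re-interpret the UTC value as local time.
            StartTimeLocal = DateTime.SpecifyKind(startLocal, DateTimeKind.Local),
            StartTimeUtc = startLocal.ToUniversalTime()   // ToUniversalTime returns Kind = Utc
        };
    }
}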

Monday 16 July 2012

SSIS Merge Not Working - sort order

I discovered an error scenario with SSIS merge joins.

I had a merge join on two datasets, based upon a join on two columns: COL_A and COL_B.
The merge join tool requires the inputs to be sorted, which they were.

However the change I made required a change to the order sequence.
Instead of it being Col_A = 1, Col_B = 2 it was changed to Col_B = 1, Col_A = 2.

The order sequence was specified logically in SSIS properties of the input data sources. However the SQL query for the 2nd data source was not updated, so it was left as

ORDER BY COL_A, COL_B

No errors were reported as they were both of type INT. However the JOIN failed and rows were being excluded that were present before.

So it is important to ensure that the data is PHYSICALLY sorted (either by using the SORT tool or by the SQL query itself), not just sequenced in the properties of the data source. This is noted here with a warning:

If the sort options indicate that the data is sorted, but the data is not actually sorted, the results of the merge or merge join operation are unpredictable.

 It's just a bit nasty because if you make this mistake, there are no warnings or errors and the first indication that something is wrong is when the output isn't what you expect.

You can also conclude that an SSIS join is different from a SQL join - it is dependent upon column positions rather than column names.


Thursday 5 July 2012

Self-hosted WCF service, using Windows Authentication, running under a user account

So here is the summary.

If you have a self-hosted WCF service, running under a user account, authenticating clients using Windows Authentication, and coded manually (not using configuration), you must follow these rules in order to avoid the dreaded 401 Unauthorized errors.

1. The service endpoint must specify a UPN

  var endpointAddress = new EndpointAddress(
                new Uri(ConfigurationManager.AppSettings["WebServiceUrl"]),
                EndpointIdentity.CreateUpnIdentity(WindowsIdentity.GetCurrent().Name));

2. The client needs to connect using the same UPN. In other words it needs to pass the user account name that the server is running under.

var channelFactory = new ChannelFactory<IVolumeService>(
                binding2,
                new EndpointAddress(
                    new Uri(connection.Url),
                    EndpointIdentity.CreateUpnIdentity("the user account name that the server is running under")));

If you generated a normal service with a Metadata endpoint, you would see this in the WSDL:

<Identity xmlns="http://schemas.xmlsoap.org/ws/2006/02/addressingidentity">
  <Upn>APOTTS@MyDomain.intra</Upn>
</Identity>

And if you generated a client from the WSDL you'd see this hidden somewhere in the client proxy.

The UPN would be the user name of the domain account under which the server was running. The server validates what the client sends to ensure they can both trust each other before it tries to raise a Kerberos ticket.
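For context, both ends were using a binding built in code along these lines - a sketch only, since the post doesn't show the exact binding (the security mode here is an assumption):

// Message security with Windows client credentials
var binding2 = new WSHttpBinding(SecurityMode.Message);
binding2.Security.Message.ClientCredentialType = MessageCredentialType.Windows;

// The endpoint address then carries the UPN identity as shown above
var endpointAddress = new EndpointAddress(
    new Uri(ConfigurationManager.AppSettings["WebServiceUrl"]),
    EndpointIdentity.CreateUpnIdentity(WindowsIdentity.GetCurrent().Name));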



WCF - System.Net.WebException: The remote server returned an error: (401) Unauthorized

Further to my previous post about SPNs and WCF applications, there is a twist in the tale.

As a reminder, I discussed beforehand how I had a self-hosted WCF application that used Windows Security as the authorisation mechanism. When I deployed this to a remote server, clients could not authenticate correctly unless they had the SPN set in the client code:

var channelFactory = new ChannelFactory<IVolumeService>(
                binding2,
                new EndpointAddress(new Uri(url), new SpnEndpointIdentity("MYSERVICE/MyMachine")
                    ));

Without this setting they would return a 401 Unauthorised, with the detail:

The HTTP request is unauthorized with client authentication scheme 'Negotiate'.
The authentication header received from the server was 'Negotiate oW8wbaADCgEBomYEZGBiBgkqhkiG9xIBAgIDAH5TMFGgAwIBBaEDAgEepBEYDzIwMTIwNjI5MTY1NjQ1WqUFAgMOYYqmAwIBKakRGw9HQVpQUk9NVUsuSU5UUkGqEzARoAMCAQGhCjAIGwZhcG90dHM='.
The remote server returned an error: (401) Unauthorized.
The target principal name is incorrect

Adding the SpnEndpointIdentity allowed the clients to work with this remote server.
However, the twist was this: when I went back to working locally, with a local client and a local server running under my current user (with admin privileges) account - it no longer worked!
This time I would get:

System.Net.WebException: The remote server returned an error: (401) Unauthorized.
   at System.Net.HttpWebRequest.GetResponse()
   at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)

Note there was no 'The target principal name is incorrect' message. It took me a while to figure out, but in the end REMOVING
new SpnEndpointIdentity("MYSERVICE/MyMachine")
got it to work locally! Bah! More reading on SPNs is required.

....

Double Bah! I figured it out. It was because I had left a bit of code on the server:

var endpointAddress = new EndpointAddress(
                new Uri(ConfigurationManager.AppSettings["WebServiceUrl"]),
                EndpointIdentity.CreateSpnIdentity("HOST/servername.domainname:9011"));

and clearly the SpnIdentity in this mismatched the client. Setting BOTH to be the same caused it to work again.

No, this still didn't work. The solution was to use a UPN rather than an SPN, because the service was running under a user account and not NetworkService or LocalSystem. Both server and client used:

 var endpointAddress = new EndpointAddress(
                new Uri(ConfigurationManager.AppSettings["WebServiceUrl"]),
                EndpointIdentity.CreateUpnIdentity(WindowsIdentity.GetCurrent().Name));

A hint was also shown by looking at the WSDL of an existing endpoint that ran under a user account:

  <Identity xmlns="http://schemas.xmlsoap.org/ws/2006/02/addressingidentity">
  <Upn>MYUSER@mydomain.intra</Upn>
  </Identity>
Lessons learnt:
  1. Self hosting WCF services is more painful than hosting them in IIS.
  2. Try and use the configuration approach as much as possible - there are more examples out there.
  3. If you don't publish a metadata exchange you have to bind manually, and this provides more chances to mismatch.
  4. Programmatically defining and consuming the endpoints is less well documented - and self hosting with Windows Authentication and SPNs is sparsely documented!
  5. Generating your client from the metadata exposed from the server is much safer. 

References:
http://msdn.microsoft.com/en-us/library/ms733130.aspx
http://msdn.microsoft.com/en-us/library/ms677601%28v=vs.85%29.aspx
http://social.msdn.microsoft.com/forums/en-US/wcf/thread/78638457-ca7a-4f88-b8a9-9bc32d4b5c7d/
http://msdn.microsoft.com/en-us/library/bb628618.aspx