Wednesday, 28 December 2016

Beyond Compare with TFS

The article describing how to use Beyond Compare with various Version Control Systems is here.

TEAM FOUNDATION SERVER (TFS)

Diff

  1. In Visual Studio, choose Options from the Tools menu.
  2. Expand Source Control in the treeview.
  3. Click Visual Studio Team Foundation Server in the treeview.
  4. Click the Configure User Tools button.
  5. Click the Add button.
  6. Enter ".*" in the Extension edit.
  7. Choose Compare in the Operation combobox.
  8. Enter the path to BComp.exe in the Command edit.
  9. In the Arguments edit, use:
    %1 %2 /title1=%6 /title2=%7

3-way Merge Pro only

  1. Follow steps 1-6 above.
  2. Choose Merge in the Operation combobox.
  3. Enter the path to BComp.exe in the Command edit.
  4. In the Arguments edit, use:
    %1 %2 %3 %4 /title1=%6 /title2=%7 /title3=%8 /title4=%9

2-way Merge

Use the same steps as the 3-way merge above, but use the command line:
%1 %2 /savetarget=%4 /title1=%6 /title2=%7

Monday, 19 December 2016

Device Manager driver update gets stuck at "Searching Online for Software"

https://answers.microsoft.com/en-us/windows/forum/windows_8-update/windows-update-not-updating-stuck-on-checking/da51ddbc-40ff-4ed9-b2a7-9381843728a1

Wednesday, 14 December 2016

Event Store - my learnings

I have been evaluating Event Store and thought I'd write up what I learned. The context is an Order Processing system, which receives orders (OrderCreated) and processes them through their lifecycle.

 1. Hard deletes are really hard! If you hard delete a stream while testing, you really CANNOT recreate it. Of course, deleting and recreating streams is not a natural thing to do in a real-world environment, but beware when experimenting: the only way to recover is to use a new DB.
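For reference, a minimal sketch of the difference between the two delete flavours over Event Store's HTTP API (assuming the default endpoint on localhost:2113; the ES-HardDelete header is what makes a delete permanent). This just composes the request rather than sending it:

```python
# Sketch: composing soft vs hard delete requests for the Event Store HTTP API.
# Assumes the default HTTP endpoint on localhost:2113.

def delete_stream_request(stream, hard=False, base="http://localhost:2113"):
    """Return (method, url, headers) for deleting a stream.

    With hard=True the ES-HardDelete header is added and the stream
    can never be recreated - exactly the trap described above.
    """
    headers = {"ES-HardDelete": "true"} if hard else {}
    return ("DELETE", "%s/streams/%s" % (base, stream), headers)

soft = delete_stream_request("TestOrder-1")             # recoverable
hard = delete_stream_request("TestOrder-1", hard=True)  # permanent tombstone
```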

2. The browser window does a lot of caching. I (soft) deleted a stream and then went back to the Stream Browser window. It still displayed the stream I had deleted, and when I clicked through it displayed the events within the stream. This led me to hard delete it, which caused other problems. In reality the browser is caching all the information and the stream HAS been deleted: either use Developer Tools to clear the cache or use the API.

3. When modifying a projection, reset it. I had errors when I modified a projection and didn't reset it. 

4. Consistency is guaranteed within a stream, not across them. If in pseudocode you write:

for (var i=1; i<6; i++)
{
    CreateStream(i);
}
you will not necessarily see 5, 4, 3, 2, 1 in the Most Recently Created Streams, and projections over the streams will be sequenced accordingly.
This is because the writes are parallel: any that were initiated at almost identical times are written to the log as each task completes. If you added a small sleep inside the loop, you would see them sequenced in the same order as the API calls.
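The parallel-write behaviour can be illustrated with a small sketch (Python, purely for demonstration): five "creates" fired concurrently land in the log in whatever order their tasks happen to complete, not the order they were initiated.

```python
import random
import threading
import time

log = []                 # stands in for the order the streams hit the log
log_lock = threading.Lock()

def create_stream(i):
    # Simulated variable write latency: each task completes at a
    # slightly different time, so arrival order is nondeterministic.
    time.sleep(random.uniform(0, 0.01))
    with log_lock:
        log.append(i)

threads = [threading.Thread(target=create_stream, args=(i,)) for i in range(1, 6)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(log)  # all five streams are created, but in no guaranteed order
```

Serialising the calls (for example, a fixed sleep between starting each write) would, in practice, restore the initiation order, matching the observation above.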

Remember, consistency is guaranteed WITHIN a stream, not ACROSS streams. Events within a stream will be written in the sequence that they are received.

This consistency guarantee may help you decide how to arrange your streams. If my stream were AllOrderEvents, containing every event for every order, that could incur a performance hit: concurrent writers would all be trying to write into one log for all orders and could get version conflicts. We don't care about the exact sequencing of one order relative to another, but we do care about the sequencing of events within an order.
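This is why a stream per order (e.g. Order-1001) is attractive. A toy sketch (illustrative Python, not the Event Store API) of the per-stream optimistic-concurrency check that makes the single-stream design painful under concurrent writers:

```python
class StreamStore:
    """Toy in-memory store with per-stream optimistic concurrency."""

    def __init__(self):
        self.streams = {}  # stream name -> list of events

    def append(self, stream, event, expected_version):
        events = self.streams.setdefault(stream, [])
        current = len(events) - 1  # -1 means "no events yet"
        if expected_version != current:
            # Two writers raced on the same stream: the loser must
            # re-read and retry. With one stream per order, writers
            # for different orders never conflict with each other.
            raise RuntimeError("WrongExpectedVersion: expected %d, was %d"
                               % (expected_version, current))
        events.append(event)
        return current + 1  # version of the newly written event

store = StreamStore()
store.append("Order-1001", "OrderCreated", expected_version=-1)
store.append("Order-1001", "PaymentAccepted", expected_version=0)
store.append("Order-1002", "OrderCreated", expected_version=-1)  # no conflict
```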

Furthermore, remember that in a distributed event-driven system, we should not rely upon Message Ordering. Some message buses may provide ordered messaging, but they will do this at a cost to performance and storage.

Event Store - the fromCategory projection does not work / does not project

I wrote a projection to enumerate all streams beginning "Order" and pull out how many orders were created (by having the event OrderCreated in their stream). This was a simple way of building up a projection of orders in the system.

fromCategory("Order")
    .foreachStream()
    .when({
        $init: function() {
            return { count: 0 };
        },
        "OrderCreated": function(state, event) {
            state.count++;
            emit('newOrders', 'order', { 'orderReference': '1' });
        }
    });


For love nor money, I could not get this to work.

In the end this answer steered me: you have to configure Event Store not only to run projections but also to start the standard projections.
EventStore.ClusterNode.exe --run-projections=all --start-standard-projections=true

Saturday, 10 December 2016

Unturned: Running a Console Application with arguments as a Windows Service

Unturned requires you to run a console application as a server.
If you want to start this automatically when a Windows Server restarts without any GUI interaction then it is necessary to run it as a Windows Service.

I tried a couple of tools and found that nssm.exe worked best. It allows you to specify a Console.exe and include startup parameters.
Other tools such as RunAsService could not handle the fact that the service did not respond in a timely fashion to the Service Control Manager.

Wednesday, 7 December 2016

Fat vs thin events

StackOverflow has a good discussion of the pros and cons of fat vs thin events. I thought I'd repeat it here.

The question raised was:

"When raising an event, which pattern is the most suited:
  1. Name the event "CustomerUpdate" and include all information (updated or not) about the customer
  2. Name the event "CustomerUpdate" and include only the information that has really been updated
  3. Name the event "CustomerUpdate" and include minimum information (Identifier) and/or a URI to let the consumer retrieve information about this Customer.
I ask the question because some of our events could be heavy and frequent."
And the response was:
Name the event "CustomerUpdate"
First let's start with your event name. The purpose of an event is to describe something which has already happened. This is different from a command, which is to issue an instruction for something yet to happen.
Your event name "CustomerUpdate" sounds ambiguous in this respect, as it could be describing something in the past or something in the future.
CustomerUpdated would be better, but even then, Updated is another ambiguous term and is nonspecific in a business context. Why was the customer updated in this instance? Was it because they changed their payment details? Moved home? Were they upgraded from silver to gold status? Events can be made as specific as needed.
This may seem at first to be overthinking, but event naming becomes especially relevant as you remove data and context from the event payload, moving more toward skinny events (the "option 3" from your question, which I discuss below).
That is not to suggest that it is always appropriate to define events at this level of granularity, only that it is an avenue which is open to you early on in the project which may pay dividends later on (or may swamp you with thousands of event types).
Going back to your actual question, let's take each of your options in turn:
Name the event "CustomerUpdate" and include all information (updated or not) about the customer
Let's call this "pattern" the Fat message.
Fat messages represent the state of the described entity at a given point in time with all the event context present in the payload. They are interesting because the message itself represents the contract between service and consumer. They can be used for communicating changes of state between business domains, where it may be preferred that all event context be present during message processing by the consumer.
Advantages:
  • Self-consistent - can be consumed entirely without knowledge of other systems.
  • Simple to consume (upsert).
Disadvantages:
  • Brittle - the contract between service and consumer is coupled to the message itself.
  • Easy to overwrite current data with old data if messages arrive in the wrong order.
  • Large.
Name the event "CustomerUpdate" and include only the information that has really been updated
Let's call this pattern the Delta message.
Deltas are similar to fat messages in many ways, though they are generally more complex to generate and consume. Because they are only a partial description of the event entity, deltas also come with a built-in "assumption" that the consumer knows something about the event being described. For this reason, they may be less suitable for sending outside a business domain, where the event entity may not be well known.
Advantages:
  • Smaller than Fat messages
Disadvantages:
  • Brittle - similar to the Fat message, the contract is embedded in the message.
  • Easy to overwrite current data with old data.
  • Complex to generate and consume
Name the event "CustomerUpdate" and include minimum information (Identifier) and/or a URI to let the consumer retrieve information about this Customer.
Let's call this the Skinny message.
Skinny messages are different from the other message patterns you have defined, in that the service/consumer contract is no longer explicit in the message, but implied in that at some later time the consumer will retrieve the event context. This decouples the contract and the message exchange, which is a good thing.
This may or may not lend itself well to cross-business domain communication of events, depending on how your enterprise is set up. Because the event payload is so small (usually just an ID), there is no context other than the name of the event on which the consumer can base processing decisions; therefore it becomes more important to make sure the event is named appropriately, especially if there are multiple ways a consumer could respond to a CustomerUpdated message.
Additionally it may not be good practice to include an actual resource address in the event data - because events are things which have already happened, event messages are generally immutable and therefore any information in the event should be true forever in case the events need to be replayed. In this instance a resource address could easily become obsolete and events would not be re-playable.
Advantages:
  • Decouples service contract from message.
  • Information about the event contained in the event name.
  • Naturally idempotent (with time-stamp).
  • Generally tiny.
  • Simple to generate and consume.
Disadvantages:
  • Consumer must make additional call to retrieve event context - requires explicit knowledge of other systems.
  • Event context may have become obsolete at the point where the consumer retrieves it, making this approach generally unsuitable for some real-time applications.
When raising an event, which pattern is the most suited?
I hope you have by now realised that the answer to this is it depends. I will stop here - yours is a great question because you could probably write a book while answering it, but I hope you found this answer helpful.

Saturday, 19 November 2016

NServiceBus Saga persisted with NHibernate errors with "The following types may not be used as proxies"

If you use an NServiceBus Saga persisted with NHibernate, and you use the Saga Data as per the sample code, it errors with "The following types may not be used as proxies".

The sample code is:

public class OrderSagaData :
    IContainSagaData
{
    public Guid Id { get; set; }
    public string Originator { get; set; }
    public string OriginalMessageId { get; set; }
    [Unique]
    public string OrderId { get; set; }
}

The solution is to mark all the properties as virtual.

Thursday, 17 November 2016

NServiceBus doesn't create TimeoutEntity or Subscription databases, even in Integration mode

Note: this refers to NServiceBus 4.6.5. Other versions may differ.

I had a problem whereby I was running an NServiceBus server on a new machine. It was using the
NHibernate ORM with SQL Server as the persistence store.

However, even with the endpoint running in NServiceBus.Integration mode, I found that the underlying databases were not being created. On starting the endpoint it would throw an ADOException with messages like

"could not execute query\r\n[ SELECT this_.Id as y0_, this_.Time as y1_ FROM TimeoutEntity this_ WHERE this_.Endpoint = @p0 and (this_.Time >= @p1 and this_.Time <= @p2) ORDER BY this_.Time asc ]\r\n  Name:cp0 - Value:SchemaCreator  Name:cp1 - Value:17/11/2006 15:10:37  Name:cp2 - Value:17/11/2016 15:10:37\r\n[SQL: SELECT this_.Id as y0_, this_.Time as y1_ FROM TimeoutEntity this_ WHERE this_.Endpoint = @p0 and (this_.Time >= @p1 and this_.Time <= @p2) ORDER BY this_.Time asc]"

as it was unable to read from the (not yet created) database.

The startup code was below:


        
        private static IBus SetupBus()
        {
            Configure.Serialization.Json();
            return Configure
                    .With(GetAssembliesToScan())
                    .DefaultBuilder()
                    .UseNHibernateSubscriptionPersister()
                    .UseNHibernateTimeoutPersister()
                    .UseNHibernateSagaPersister()
                    .UseNHibernateGatewayPersister()
                    .UseInMemoryGatewayDeduplication()
                    // Hack the unobtrusive conventions to force the console to work.
                    .DefiningEventsAs(t => t.Namespace != null && t.Namespace.StartsWith("Orders.Contracts"))
                    .UnicastBus()

                    // Load handlers to allow subscriptions to be set up.
                    .LoadMessageHandlers()
                    .CreateBus()
                    .Start(() => Configure.Instance.ForInstallationOn().Install());
        }

        private static Assembly[] GetAssembliesToScan()
        {
            return new[]
            {
                typeof(IOrder).Assembly,
                typeof(Program).Assembly
            };
        }

In the end after stepping through lots of NServiceBus and NServiceBus.NHibernate code I found the answer to the problem.
NServiceBus scans assemblies for code implementing the INeedToInstallSomething interface. The NServiceBus.Unicast.Subscriptions.NHibernate.Installer.Installer class implements this interface and it is this which calls the NHibernate SchemaUpdate.Execute() method.

However, despite the application's XML configuration referring to NHibernate and the code above explicitly defining the NHibernate persister, the installation code is not run because the implementing classes have not been scanned.

The solution is to modify the GetAssembliesToScan method above to include the NServiceBus.NHibernate assembly.


        
        private static Assembly[] GetAssembliesToScan()
        {
            return new[]
            {
                typeof(IOrder).Assembly,
                typeof(Program).Assembly,
                typeof(NServiceBus.Unicast.Subscriptions.NHibernate.Installer.Installer).Assembly
            };
        }

Additionally, if your endpoint works with a Saga you need to scan the assembly that contains the SagaData.

public static IEnumerable<Assembly> AssembliesToScan
{
    get
    {
        yield return typeof(IOrderCreated).Assembly;
        yield return typeof(OrderSagaData).Assembly;
        yield return typeof(NServiceBus.Unicast.Subscriptions.NHibernate.Installer.Installer).Assembly;
    }
}

Wednesday, 16 November 2016

Samsung Galaxy S5 (SM-G900F) doesn't support 4G in the USA

Having paid for a data upgrade with EE for the USA, I discovered that the Samsung Galaxy S5 (SM-G900F) doesn't support the 4G frequencies used in the USA.

Friday, 4 November 2016

Virgin Media CGNV4 - a security risk?

I've noticed that the Hitron CGNV4 router used by Virgin Media for its high-speed broadband has port 8080 open. So far I've been on the phone to them for 45 minutes and nobody knows what it's for!

Sunday, 30 October 2016

Updating the Galaxy Tab S to support adopted storage

Grrr. Samsung have deliberately disabled adopted storage on the Galaxy Tab S, meaning that the 16GB rapidly fills, especially when app developers lazily prevent their apps from being moved to external storage.

I tried connecting the tablet via USB and running the
adb shell sm set-force-adoptable true
command, but this doesn't work.

There are steps to configure Adopted Storage involving TWRP and the Storage Enabler, and by and large it appeared to work: after following the steps I had the option to install a card as internal memory. But I got a random error on the Settings screen after formatting the SD card, and when copying files to the tablet it still gave an "Out of disk space" error. At that point, after 3 hours of trying, I gave up and got on with my life.

Lessons learned:
Samsung create great phones and tablets, but the firmware sucks. There isn't a phone out there at the moment that supports both adoptable storage and a replaceable battery, and no tablet supporting adoptable storage. It's as if they want their products to have a very short useful lifespan, if not a physical one.

References
https://nelenkov.blogspot.co.uk/2015/06/decrypting-android-m-adopted-storage.html?view=flipcard
http://forum.xda-developers.com/galaxy-tab-s/general/patch-adoptable-storage-enabler-t3460478/page4
http://odindownload.com/
http://www.modaco.com/news/android/heres-how-to-configure-adoptable-storage-on-your-s7-s7-edge-r1632/

Galaxy Tab S stuck in "Downloading" mode

Galaxy Tab S stuck in "Downloading" mode:

Press: Power + Vol Down + Home for 20 seconds

Recovery Mode
Turn off the device
Press and hold Volume UP key + Home Key
then Press and hold Power key
Release all keys when you see Android System Recovery
Use the Volume Up and Volume Down keys to navigate the menu
Use the Power key to confirm or execute a menu item

Download Mode or ODIN Mode
Turn off the device
Press and hold Volume Down key + Home key
then Press and hold Power key
Release all keys when you see ODIN Mode
Use the Volume Up key to continue
Use the Volume Down key to cancel (restarts the device)

Odin is a tool for updating the firmware on Samsung devices.

Monday, 17 October 2016

Distributed Systems: de-duplication and idempotency

When working on distributed systems there are two important concepts to keep in mind - deduplication and idempotency. The two are sometimes confused but there are subtle differences between them.

Background Context
In our e-commerce system, we receive orders submitted from Checkout and manage them through a lifecycle of Payment Authorisation, Fraud Check and Billing through to Dispatch. We employ an asynchronous, message-driven architecture using a combination of technologies such as NServiceBus, MSMQ and Azure Service Bus, with a mixture of workflow choreography and orchestration. In this model, lightweight handlers receive a command or subscribe to an event, do some work, and publish a resulting message; these are chained together with other handlers to complete the overall workflow.

At-Least Once Messaging
Messaging systems typically offer at-least-once messaging. That means you are guaranteed to get a message once, but under certain circumstances - such as failure modes - you may get the message twice (perhaps on different, independent threads of execution).

Avoiding certain work twice
However, some work items should not be repeated: you should only charge a customer once or perform a fraud check once. If our handler receives a PaymentAccepted event it performs a Fraud Check with a third-party service. This costs money, and repeatedly calling it will affect a customer's fraud score - it should only be done once per order.

Deduplication
Some people employ deduplication to solve this. For example, Azure Service Bus gives you the option to deduplicate messages within a time window (say 10 minutes). With this enabled, Azure Service Bus checks every message ID and, if the message has already been processed, prevents another handler processing it. This is possible because ultimately the broker architecture revolves around a single SQL database (limited to a region).

However, deduplication does not solve all of the problems. Idempotency is very important too.

Idempotency
If we are truly idempotent then a handler should “always produce the same output given the same input”.

Ideally, a handler would be fully idempotent. If the work is to subscribe to a NewCustomerOrder event, set a new-customer flag to true and publish a NewCustomerUpdated event, then we should be able to publish the input event ten times and the outcome will always be the flag set to true and a NewCustomerUpdated event published.
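A sketch of what "same input, same output" means for that handler (illustrative Python; the names come from the example above):

```python
def handle_new_customer_order(customer, event):
    """Idempotent handler: processing the same event any number of
    times leaves the same state and yields the same published event."""
    customer["new_customer"] = True   # setting true ten times is still true
    return "NewCustomerUpdated"       # always (re)published

customer = {}
event = {"type": "NewCustomerOrder", "customerId": 42}
results = [handle_new_customer_order(customer, event) for _ in range(10)]
# ten deliveries, one outcome: the flag is true and
# NewCustomerUpdated is published on every delivery
```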

However, in the Fraud Check scenario, we should subscribe to the PaymentAccepted event, check history to determine whether to perform a Fraud Check, and always publish the result - normally FraudCheckPassed.

Idempotency supports Replays
This is important for replays. Our system once experienced a series of failures that caused messages to be lost at various stages of the workflow.

This is shown below:


It would have been very useful to go to the leftmost part of the workflow and replay the message: the downstream handlers would have responded and repeated their work as required. However, many handlers employed deduplication: they simply swallowed the input event, did no work, and published no outcome. That meant later stages of the workflow were not exercised and the system had to be recovered stage by stage, which required a lot of manual effort, was time consuming and was prone to error.

If each handler were idempotent, it would have received the input event, chosen whether to do work or not, and published the output event.

What are the side effects?
So what impact could this have?

In the event of duplicated messages at the transport level (e.g. at-least-once messaging) we could end up with more load on the system: if one of the leftmost handlers received a duplicate, it would propagate down to the right. However, I believe this is a small price to pay for the enhanced supportability. Furthermore, Azure Service Bus, being a broker with message-level locking, will prevent this except in handler failure scenarios.

If we are using Event Sourcing, then if we duplicate or replay an event our event streams will record the fact that FraudCheckPassed was published twice. Arguably, though, this is a good thing. It is a true reflection of history, which is what the event stream should be. It is much better to have two publishes recorded than to have someone manually hacking the event stream to fix problems.

In our current system we enforce deduplication at the transport layer using Azure Service Bus configuration. However, I believe this should be changed: it prevents partitioned queues, it prevents us changing the transport layer, and in any case deduplication and idempotency should be enforced at the handler level.

Deduplication
Note that idempotency and deduplication are separate concerns (although often mixed up!). For a Fraud Check, or a call to a Payment Processor for billing, it is important that the call is not repeated. If we receive a duplicate or replayed message we don't call these external systems again; rather, we just republish the last outcome.
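As a sketch (hypothetical names; a real implementation would persist the outcome transactionally alongside the handler's state), a handler that deduplicates the external call but stays idempotent by republishing the stored outcome:

```python
class FraudCheckHandler:
    """On a duplicate or replayed PaymentAccepted, don't re-call the
    fraud service (it costs money and affects the customer's fraud
    score) - republish the outcome recorded the first time instead."""

    def __init__(self, fraud_service):
        self.fraud_service = fraud_service
        self.outcomes = {}  # order_id -> event published last time

    def handle_payment_accepted(self, order_id):
        if order_id not in self.outcomes:  # first delivery only: call out
            passed = self.fraud_service(order_id)
            self.outcomes[order_id] = ("FraudCheckPassed" if passed
                                       else "FraudCheckFailed")
        return self.outcomes[order_id]     # always publish an outcome

calls = []
handler = FraudCheckHandler(lambda order_id: calls.append(order_id) or True)
first = handler.handle_payment_accepted("order-1")
replay = handler.handle_payment_accepted("order-1")  # replayed message
# one external call, but both deliveries publish FraudCheckPassed
```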

Closing argument
Our APIs are idempotent, so why aren't our handlers? When an API receives the same input request, it returns the same output. This matters if, for example, the client enters a tunnel while issuing an HTTP request and has to retry it.
Why aren't our handlers behaving in the same manner?

Tuesday, 11 October 2016

Netscaler Gateway Plugin

The Netscaler Gateway plugin is incompatible with VirtualBox's "VirtualBox Host-Only Ethernet Adapter": you get timeouts when you do a ping test.
When the adapter is enabled, the VPN connection intermittently fails to route traffic correctly. Disable it to work with the Netscaler Gateway.

Saturday, 8 October 2016

gpedit is the new control panel: Windows Update and Windows Ink.

Today I got rather annoyed by Windows 10. I know it is supposed to cater for a wide range of users, including those who are not IT-savvy, but it has taken "knowing best" to a whole new level.

When I turned on my laptop today:

- it spent ages installing a new update
- it overwrote my Windows + W shortcut to launch a new program, Windows Ink Workspace, which I don't want
- it then tried to reboot as I was working

I then spent half an hour trying to find the option to uninstall Windows Ink. You can't. I then tried to find the option to release my Windows + W shortcut. You can't.

This is too aggressive. I liked my computer the way I had it before the update. I don't expect someone to come into my office, replace my pens and chair, and rearrange my desk because they think they are doing me a favour. I want it like I had it.

So can you reconfigure Windows 10 to turn off all this rubbish - to ask before installing updates, and to disable Windows Ink Workspace? Well, no. Not through the front door.

Fortunately there is gpedit. It appears this is becoming the new control panel. The old control panel is for the non-IT savvy users who just want to do the basics.

So to stop Windows Ink Workspace:
http://www.thewindowsclub.com/disable-windows-ink-workspace

To force Windows 10 to be polite and ask before downloading updates over your Internet connection:
http://www.howtogeek.com/224471/how-to-prevent-windows-10-from-automatically-downloading-updates/

Friday, 30 September 2016

Options for exchanging information between domains

There are various approaches for exchanging information between domains in a system.

1. Fat Command, Subordinate System.
The system that holds the information commands a subordinate to do some work. In the command it provides all the information that it needs.
For example an Order Processing System may send a SendCustomerEmail command to a Notification system and that command holds all of the data deemed necessary to populate an email.
The recipient acts immediately upon the receipt of the command.

2. Fat Command, Thin Events
A system sends a command to a second system. That command contains all of the public information that is necessary for the execution of an action, for example SavePaymentCardDetails.
At some point in the future that system, or an unrelated system, issues an event that triggers the execution of the action.
For example, a front-end Web application may preload a payment system with SavePaymentCardDetails. The payments domain saves the card details to its domain repository. Later, the Fraud Check system issues a FraudCheckPassed event and the Payment domain then bills the credit card.

3. Thin Event, We'll Call You
In this scenario an event is published and the receiving system calls the Web API of the sender (or other systems) to get the information required to perform the action. For example, an Order Despatch system publishes the OrderDispatched event and the subscribing Customer Notification system calls the Order API to get the details of the order so it can send an email to the customer.
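The OrderDispatched example can be sketched as follows (illustrative Python; the Order API call and email sender are stubbed as hypothetical callables):

```python
def on_order_dispatched(event, get_order, send_email):
    """Thin-event subscriber: the event carries only the order id,
    so we call back to the Order API for the context we need."""
    order = get_order(event["orderId"])  # e.g. GET /orders/{id}
    send_email(order["customerEmail"],
               "Your order %s has been dispatched" % event["orderId"])

# Wiring with stubs to show the flow:
sent = []
fake_order_api = lambda order_id: {"customerEmail": "jo@example.com"}
on_order_dispatched({"orderId": "1001"}, fake_order_api,
                    lambda to, body: sent.append((to, body)))
```

The trade-off, as noted in the fat vs thin events post above, is the extra round trip and the risk that the context has changed by the time the subscriber fetches it.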

4. Fat events
In this scenario an event is published which contains the public information shareable with other domains. For example the OrderCreated event is published which contains the Line Items, Delivery Address, Billing Address etc.
There are cons to this approach:

  • You may be encouraged to publish information that you want to hide (e.g. addresses)
  • You may not be able to restrict access to that information
  • An implicit coupling may develop. You may gain a whole number of subscribers to your events, some of whom you have no knowledge of, who become heavily dependent upon them. You then struggle to upgrade or deprecate older versions of the message.

References
http://andreasohlund.net/2012/02/21/talk-putting-your-events-on-a-diet/
http://stackoverflow.com/questions/34545336/event-driven-architecture-and-structure-of-events



Tuesday, 20 September 2016

Changing a password on nested Remote Desktop Sessions

To change a password on a nested remote desktop session:

Start > Run > osk

Hold down the CTRL and ALT keys on the physical keyboard.

Click on the DEL button on the virtual keyboard.

Monday, 12 September 2016

MSMQ Utilities

List all of the private queues:
Get-MsmqQueue -QueueType Private | select QueueName

NServiceBus, "Failed to enlist in a transaction" and MSMQ overload.

We've had several problems where, after a serious fault in our infrastructure, NServiceBus on MSMQ has failed to start up gracefully. On many occasions it would error for anything up to several hours before eventually unblocking itself and processing as normal.

The context of this is a very popular ecommerce site and NServiceBus is trying to process anything up to 40,000 queued messages after a period of downtime.

For background context: on two occasions this was a total power failure in a commercial data centre, and on the other it was a SAN disk failing, dragging down total throughput.

Our NServiceBus is on-premise and uses MSMQ as the underlying transport. Some endpoints use MSDTC and enlisted transactions. (As an aside, this is something you should be able to design out. Transactions are not particularly friendly to very distributed messaging systems. With a combination of idempotency, built-in MSMQ transactions and retries you can avoid the need for them).

One of the errors seen was "Cannot enlist the transaction". It is believed that the startup contention for MSMQ causes this. You can turn on MSDTC logging, which requires the TraceFmt.exe tool to format the logs for human consumption.

You can also turn on System.Transaction tracing.

Once we turned this on we could see repeated errors whereby a transaction was started and then aborted 2 minutes later. This was happening continuously and causing NServiceBus to fail to start up. After a long period of time (an hour or more) one transaction managed to complete successfully and this started to allow other messages to flow through.

Our team originally tried to solve this by increasing the MSDTC timeout duration. However, this is not enough: underlying it is the System.Transactions timeout, which also needs to be changed.

<system.transactions>
  <defaultSettings timeout="00:05:00"/>
</system.transactions>

The solution is to perform one or more of the following:

  • wait
  • increase the transaction timeout
  • reduce the queue length if it is too big on startup by using a temporary queue and copying messages manually
  • remove the need for MSDTC
  • or fix the underlying performance problem that is hampering MSMQ.

Sunday, 4 September 2016

Publish a local SQL Server database to Azure

In SSMS 2014, right-click the database and choose Tasks > Export Data-Tier Application.
You need to specify a storage account and the key for that account.
It will upload the BACPAC to the storage account.

Then, in the Azure Portal, click Import Database. Select the subscription and the storage account, find the BACPAC, and complete the blade to recreate the database.

Visual Studio 2013 - publishing to a different Azure account

I have a work Azure account and a personal Azure account. In the work copy of Visual Studio 2013, when I try to publish a web site to my personal Azure account, the publish profile dialog only offers my work account.

The Sign Out button doesn't work.

However, there is a workaround. Go to Server Explorer and choose "Connect to Azure Subscription".
Enter the personal Azure account details and it will connect to that account.
From then on, the publish dialog will use the personal credentials.

Tuesday, 16 August 2016

MVC routing fails and IIS 7.5 returns 404 with StaticFile handler

I had a newly built Windows 7 machine with IIS 7.5 and encountered an old problem again.
An MVC Web API application I had installed was failing to run: IIS was returning a 404 and indicating that the StaticFile handler was being executed:


The handlers were configured correctly in the web.config:

Running aspnet_regiis -i against the correct .NET version didn't do anything.
People recommend that you don't add the <modules runAllManagedModulesForAllRequests="true"> element to the web.config, so I didn't. What was going wrong?

In the end I found my own blog post! The problem was that the ISAPI filters were disabled by default; turning them on solved it.

Friday, 12 August 2016

Tracing complex code in Visual Studio

The following tools are available if you want to trace who is calling a method in complex code:

CodeLens - only available in VS2013 Ultimate or 2015 Enterprise, this will show the references to a class, method or property.

Or the View Call Hierarchy context menu.

Or the Resharper -> Inspect -> Incoming Calls.

Tuesday, 9 August 2016

Certificate Thumbprint in MMC

The certificate thumbprint in .NET code is equivalent to the SHA1 hash shown in the MMC Certificates snap-in.

Monday, 25 July 2016

The future of mobile (sort of)

Forget Straight Through Processing, Integration Hub and all that stuff. Yesterday I experienced the future of digital (mobile) computing: Pokemon Go!

After a 3-hour drive home I was confronted by two kids in my living room hassling me to download Pokemon Go “because all our friends have it”. So reluctantly I did, consoling myself that at least it might get them out of the house, away from their tablets and walking a bit.
And so out we went. We had to walk to our local train station where there was a “Pokemon Gym” and then to our church because we could throw some Pokemon balls at a Pokemon, capture him and then “hatch his eggs”. (I’m sure I’m getting this all wrong).

But the thing that struck me was that once you’ve played it, you suddenly become aware that everybody else walking along the street in a particular pose (head down, looking at their mobile) is in fact playing Pokemon Go. I counted at least 4 separate groups of people in the churchyard trying to capture that elusive Pokemon. Apparently a 40-year-old parent I know is the local Pokemon champion. Sad.

It’s quite a fad. Yesterday my son told me that it was on TV that some children had to be rescued by the fire service because they were stuck up a tree searching for a Pokemon.

On the technical side; it really is a good mix of GPS services, augmented reality and networked gameplay. I don’t recommend you try it! (I’m secretly a little bit gutted I had the idea of networked GPS gameplay several years ago but did nothing about it).

The world is evolving. What seems odd today (middle-aged people walking around playing games in the street) suddenly becomes the norm.

But what's the relevance to work? Here in Financial Services we're OK, right? We live in a regulated environment with a high barrier to market entry, don't we?

Well, not so fast. In the UK, Tandem Bank is launching: the second mobile-only company to be granted a full UK banking licence. (I'm buying some of their shares when they IPO.)

Revolut has, er, revolutionised my trips abroad. Interbank foreign exchange rates, even currency trading if you're so inclined. Charge-free cash withdrawals abroad. And immediate feedback to your phone when you make that purchase abroad. I can't see that [wo]man in the currency booth at the airport lasting much longer, nor the excessive fees charged by the banks.

We also have peer-to-peer insurance with sites like Guevara.

Suddenly paper-based applications and telephone calls to change addresses feel a little, well, old.

Tech is amongst us. We need to embrace it in our professional lives, or suddenly find ourselves out in the cold. After all, the London black cab trade was a regulated, closed market with a high barrier to entry. And then along came Uber...

Friday, 15 July 2016

Android Folders

Default Android drawable folder:

\path-to-your-android-sdk-folder\platforms\android-xx\data\res

Tuesday, 5 July 2016

Android Studio Tips - Making it behave like Visual Studio

If you are a Visual Studio developer then sometimes you will want Android Studio to behave similarly. Here is a list of tips for configuring Android Studio.


Highlight the selected tab

You can configure Visual Studio so that the project view always highlights the file in the currently selected tab. Fortunately Android Studio supports this too: click on the settings icon in the Project window and select "Auto scroll from source".


List the class members

Unfortunately there is no dropdown listing the class's members as there is in Visual Studio. However, CTRL+F12 brings up a popup that does the same job.


Thursday, 30 June 2016

Android Async Processing

This great article documents some of the ways in which you can do asynchronous processing on Android.

Wednesday, 1 June 2016

Sunday, 29 May 2016

Preventing the low disk space warning on an HP TouchSmart

The HP TouchSmart comes with a 16.4GB recovery partition. Unfortunately Windows keeps warning you that there is no space left on it. The HP site gives some useless instructions to delete files, but there are no spare, safe files to delete. Fortunately Microsoft enable you to turn off the warning:

https://support.microsoft.com/en-us/kb/555622

Determining the Windows Product ID & Product Key from an existing copy of Windows

The command

systeminfo

will tell you the Product ID.

The command

wmic path softwarelicensingservice get OA3xOriginalProductKey

will tell you the Product Key.
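If you want to capture these values programmatically, a small script can run the command and strip the header line that wmic prints above the value. This is an illustrative Python sketch only (the sample output and key below are fabricated):

```python
import subprocess

def parse_wmic_value(output):
    """wmic prints a header line followed by the value; return the value."""
    lines = [line.strip() for line in output.splitlines() if line.strip()]
    return lines[1] if len(lines) > 1 else ""

def get_product_key():
    # Windows only: the same wmic command as above, run via subprocess.
    out = subprocess.run(
        ["wmic", "path", "softwarelicensingservice",
         "get", "OA3xOriginalProductKey"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_wmic_value(out)

# Shape of the wmic output (with a made-up key):
sample = "OA3xOriginalProductKey\nABCDE-12345-FGHIJ-67890-KLMNO\n"
key = parse_wmic_value(sample)
```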

Sunday, 24 April 2016

TurboCAD 20 online activation

You can pick up a copy of TurboCAD 20 for a good price online and it offers some good features if you are after a simple CAD program.

After 30 days it will ask for activation, which you can do either online or over the telephone.
If you can't wait for (or afford to call during) US office hours, you can try an online activation.

Unfortunately the button on the GUI doesn't work.

However, you can do it yourself via the browser.
Navigate to http://activate.imsisoft.com/, where you can enter your serial number and an email address. It will display an activation code and also email it to you.

Sunday, 10 April 2016

Brother 9450CDN Toner Life End message

Sometimes the Brother 9450CDN reports a Toner Life End message when there is still toner left in the cartridge.

It appears there are two solutions. The first is to stick a piece of black tape over the window of the toner cartridge, or over the hole for the sensor in the toner tray. I left the tape on just in case it would do some good in the future.

The other method is:

  1. With power on, open the toner access main door. You will get a “Cover is Open” message on the LCD.
  2. Press the “Clear/Back” button and you will be taken to the toner “Reset Menu”
  3. You can then scroll through the reset options for the printer’s toner cartridges:
    1. B.TNR-S – Black toner small cartridge (TN-110)
    2. B.TNR-H – Black toner high-capacity cartridge (TN-115)
    3. C.TNR-S – Cyan toner small cartridge (TN-110)
    4. C.TNR-H – Cyan toner high-capacity cartridge (TN-115)
    5. M.TNR-S – Magenta toner small cartridge (TN-110)
    6. M.TNR-H – Magenta toner high-capacity cartridge (TN-115)
    7. Y.TNR-S – Yellow toner small cartridge (TN-110)
    8. Y.TNR-H – Yellow toner high-capacity cartridge (TN-115)
  4. Select the cartridge size you have and the colour you want to reset, and press OK. Since I had small cartridges, I used the S options for all three colours.
  5. Each cartridge must be reset individually. Press “1” to reset.
  6. Press “Clear/Back” to get out of the menu, then close the door.

Sunday, 20 March 2016

BlackVue 650 - Please check the SD card

The DR650GW-2CH, like the DR600GW and the DR750LW-2CH, cannot recognise exFAT-formatted SDXC cards. To get the BlackVue DR650GW-2CH to boot up successfully with a 64GB micro SDXC card inserted, you need to format the card as FAT32.

To fix this problem, use the BlackVue viewer app and the format function within it.

Thursday, 4 February 2016

At Least Once Messaging, Atomic operations and SQL Server

In an NServiceBus system with at-least-once messaging, there is always a chance that two handlers execute with identical copies of a message. Handlers should be idempotent, protecting against the same message producing different outcomes.
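To make the idea concrete, here is a minimal sketch of an idempotent handler (written in Python purely for illustration; the names and the in-memory store are invented, not NServiceBus APIs). The handler records each message's unique id and skips any id it has already processed:

```python
# Sketch of an idempotent handler: side effects are applied at most once
# per message id, so a duplicate delivery changes nothing.
processed_ids = set()   # in production this would be a durable store
ledger = []             # stands in for the PaymentLedger side effect

def handle(message):
    """Apply the message's effect, keyed on its unique id."""
    if message["id"] in processed_ids:
        return False          # duplicate delivery: ignore it
    ledger.append(message["amount"])
    processed_ids.add(message["id"])
    return True

# At-least-once delivery may hand us the same message twice:
msg = {"id": "order-42", "amount": 100}
first = handle(msg)    # applied
second = handle(msg)   # duplicate, ignored
```

In a real system the processed-id check and the business write would need to happen atomically, which is exactly the concern the stored procedure excerpt below addresses.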

Moving down to the repository layer raises interesting implications. If we have an upsert-style stored procedure (update/insert), should it protect against high-concurrency situations?

Take the excerpt below from a financial system:

IF NOT EXISTS (SELECT ReceiptId FROM [dbo].[PaymentLedger] WHERE ReceiptId=@ReceiptId)
BEGIN
      INSERT INTO [dbo].[PaymentLedger]
      ....
END
The select-then-insert pattern above is not protected against concurrent access, and with our messaging infrastructure there is a (low) chance of that happening.
If two handlers process copies of the same message, we could insert two rows into the PaymentLedger table.

This article discusses the problem and suggests a fix; even the MERGE statement is not immune. It also demonstrates a good way of reproducing the race by scheduling a SQL command.

We could rely upon the caller above us to protect us from this scenario. However, if we want the stored procedure to be explicitly safe under high-volume concurrency, then the transaction really should be applied at the SQL level rather than relying on the client to apply it.
We may not trust our client because:

  • it may not have applied an idempotency check correctly
  • our endpoints often don't wrap the SQL in a transaction
  • or distributed transactions may have been disabled.

An example fix is to modify the SQL as follows:
BEGIN TRAN 
IF NOT EXISTS (SELECT ReceiptId FROM [dbo].[PaymentLedger] WITH (UPDLOCK, SERIALIZABLE) WHERE ReceiptId=@ReceiptId)
BEGIN
      INSERT INTO [dbo].[PaymentLedger]
      ....
END
COMMIT TRAN

This change will affect performance very slightly by reducing the throughput of these inserts/updates.
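The difference between check-then-insert and an atomic insert can be reproduced in miniature with SQLite. This is a Python sketch for illustration only (the table and column names are borrowed from the excerpt above; the production fix remains the locking hints shown): `INSERT OR IGNORE` performs the existence check and the insert as one atomic statement, so a duplicate ReceiptId cannot produce a second row.

```python
import sqlite3

# Miniature model of the PaymentLedger table. INSERT OR IGNORE does the
# existence check and the insert atomically - unlike a separate
# "SELECT then INSERT" - so two deliveries of the same ReceiptId
# cannot both insert a row.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE PaymentLedger (ReceiptId TEXT PRIMARY KEY, Amount INTEGER)"
)

def record_payment(receipt_id, amount):
    """Insert the receipt if not already present; returns rows inserted (1 or 0)."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO PaymentLedger (ReceiptId, Amount) VALUES (?, ?)",
        (receipt_id, amount),
    )
    conn.commit()
    return cur.rowcount

first = record_payment("R-1001", 250)   # new receipt: 1 row inserted
second = record_payment("R-1001", 250)  # duplicate message: 0 rows inserted
rows = conn.execute("SELECT COUNT(*) FROM PaymentLedger").fetchone()[0]
```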

Friday, 29 January 2016

ASP.NET MVC WebApi returned a 404 on a Windows 2003 R2 32-bit server

We promoted an ASP.NET MVC Web API project from dev to production.
It was working fine in dev, and we XCOPYed the files from dev to production.
All of the IIS settings were comparable - or so we thought - so what could go wrong?

Well, the famous 404 error can. We were getting a 404 on all requests to the Web API.

We tried various things, like setting the
<modules runAllManagedModulesForAllRequests="true" />
element.

We did an aspnet_regiis.exe -i.

I checked we didn't have to allow 32-bit applications on a 64-bit machine.

We tried setting up a wildcard script map.

None of these worked. Nothing was showing in Event Viewer.

In the end we went to the IIS logs. We noticed the 404 status code, but also a sub-status of 2 and an sc-win32-status of 1260. This article showed that a lockdown policy was rejecting the request: 1260 is ERROR_ACCESS_DISABLED_BY_POLICY.
The cause was that, under the Web Service Extensions branch in IIS Manager, ASP.NET v4.0.30319 was set to Prohibited.

Lesson learned - check the IIS log and look at the detailed error codes. A 404 has a sub-status code, and the Win32 status code yields even more information.
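When hunting for these codes, a short script can pull the interesting fields out of the W3C-format log. A Python sketch for illustration (the field list and log line below are fabricated examples; the real column order is defined by the #Fields directive at the top of your log):

```python
# Pull sc-status, sc-substatus and sc-win32-status out of W3C-format
# IIS log lines. The field order below is an assumed example; match it
# to the #Fields header of your own log.
def parse_iis_line(fields, line):
    """Split a space-delimited W3C log line into a field->value dict."""
    return dict(zip(fields, line.split()))

fields = ["date", "time", "cs-method", "cs-uri-stem",
          "sc-status", "sc-substatus", "sc-win32-status"]
sample = "2016-01-29 10:15:02 GET /api/orders 404 2 1260"

entry = parse_iis_line(fields, sample)
# A 404 with sub-status 2 and Win32 status 1260 is the lockdown-policy case.
```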