Category Archives: Dev Thinking
Using Using Again and Again
Here’s a quick little tip that addresses a minor thing that annoys me much more than it probably warrants…
If I need multiple instances of a variable of a given type – say, a FileStream – with different initializing values, I often end up declaring repeated variable names:
FileStream fs1 = new FileStream("FirstFile.txt", FileMode.Open);
FileStream fs2 = new FileStream("SecondFile.txt", FileMode.Open);
A recent real-world example came up when I was working with SQL command objects; I needed to execute one command, then check the results and either run a different command if it updated any records or just return. (The source includes far too much extraneous code to share for this example, but you get the idea…)
Using repeated variable names isn’t the most terrible thing in the world, but I’ve spent many wasted minutes agonizing over *just* the right names for my variables, often ending up with something bogus like “firstFile / secondFile” or “readingFile / writingFile” – accurate, but really overkill for the amount of anxiety it causes and the value achieved.
Recently I realized that I have another solution, at least some of the time. If my variable’s type implements IDisposable and so supports the using statement, I can simplify my cleanup/reset code while also eliminating the redundant variable declarations. Example code for using is often written like this:
using (FileStream fs = new FileStream("FirstFile.txt", FileMode.Open))
{...}
and so I’ve used this for years and never really thought about alternate ways of writing that block. But it turns out that I don’t *have* to declare the variable inside my using statement. I can solve my redundant variable name peeve with this simple tweak:
FileStream fs;
using (fs = new FileStream("FirstFile.txt", FileMode.Open))
{...}
using (fs = new FileStream("SecondFile.txt", FileMode.Open))
{...}
There is a caveat to this, namely that if you need access to both objects at the same time in your code (say, if you’re copying from one FileStream directly to another), you still need separate variables. But it’s nice to know this works.
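For that copy scenario, a minimal sketch (the file names here are just placeholders) would nest two using blocks so that both streams stay open at once:

```csharp
using System.IO;

// Both streams must be open simultaneously, so each needs its own variable.
// Stacked using statements keep the disposal behavior without extra nesting.
using (FileStream source = new FileStream("FirstFile.txt", FileMode.Open))
using (FileStream destination = new FileStream("SecondFile.txt", FileMode.Create))
{
    source.CopyTo(destination); // Stream.CopyTo is available in .NET 4.0 and later
}
```

Both streams are disposed in reverse order when the block exits, even if the copy throws.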
Storing SQL Queries for .NET Code
This is a repost of my response to a question over on Stack Overflow. The user had posed a question about how to best convert and store large dynamic queries for Microsoft SQL Server which were being generated as inline code in a C# project.
If you’re not using Entity Framework and LINQ for your queries, then I recommend that you put the queries into SQL Stored Procedures in your database, and use ADODB.Command or SqlClient.SqlCommand objects to run them.
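Calling a stored procedure from ADO.NET is straightforward; here’s a minimal sketch (the procedure name, parameter, and surrounding variables are made up for illustration):

```csharp
using System.Data;
using System.Data.SqlClient;

using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("dbo.UpdateProductPrices", conn))
{
    // Tell ADO.NET to invoke the procedure directly
    // rather than parse the text as inline SQL.
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@ProductId", productId);

    conn.Open();
    int rowsUpdated = cmd.ExecuteNonQuery();
}
```

The rowsUpdated count is what lets you decide whether to run the follow-up command or just return.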
Depending on the complexity of how you are building the query, it might make sense to create multiple queries to replace your inline calls. In particular, if you have a block of code that filters by product OR by store OR by chain (for example), and those are all handled by one block of dynamic C# code to build the command, it would be better to have 3 separate stored procedures in SQL to handle each case than to try to duplicate that dynamic behavior in one procedure.
An added benefit to this approach is that you will probably find better overall performance and index tuning opportunities with the stored procedures due to the way SQL builds query plans.
Some general comments about how SQL manages query plans
If you build your stored procedure in your database, SQL Server generates a query plan the very first time the procedure is run, using whatever parameters and flow that first call requires to optimize the query, and then caches the plan for reuse. If you load your queries dynamically – whether with generated dynamic SQL or by loading SQL scripts saved as files – then SQL runs this analysis every time the call is made.
Generating the query plan every run is a performance hit. Depending on your database and your query, this hit can be very minor – a few milliseconds on a query that runs once a day – or very major – a second or two for a query that runs thousands or millions of times a day.
Splitting your calls into 3 separate procedures is a good idea because SQL Server builds the plan based on the first run’s parameters. If you have a procedure that takes an optional ID value, and returns one row if you pass the value or all rows if you don’t, then depending on which call happens first, SQL is going to do either an index lookup or a table scan every time you call it – and neither plan is optimal for the other operation. Splitting this into two separate calls allows SQL to generate an optimal query plan for each operation.
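To illustrate the split (table, column, and procedure names are invented for the example), the single optional-parameter procedure becomes two:

```sql
-- One plan per procedure: the lookup can use an index seek,
-- the full listing can use a scan, and neither plan pollutes the other.
CREATE PROCEDURE dbo.GetCustomerById
    @CustomerId INT
AS
    SELECT CustomerId, Name
    FROM dbo.Customers
    WHERE CustomerId = @CustomerId;
GO

CREATE PROCEDURE dbo.GetAllCustomers
AS
    SELECT CustomerId, Name
    FROM dbo.Customers;
GO
```

The calling code picks the procedure based on whether it has an ID, which is usually a one-line branch in C#.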
Another aspect of this is more for logging and analysis. Many SQL performance tools, including those built in to SQL Server, are able to look at many calls of the same stored procedure and aggregate the results to identify long-term performance trends. Some tools even do a really good job of pinpointing the exact parts of a procedure which run poorly. If you’re using dynamically generated SQL, however, those calls all become a sea of separate events, and your 3-second call made millions of times a day gets lost behind a long-running query that bubbles up once or twice a day. But if the 3-second call is a stored procedure, you can see that it collectively becomes 90% of your server workload, and a good candidate for refactoring and query tuning.
So, while it feels a little bit like you’re violating the DRY principle to generate multiple similar queries as separate stored procedures, you want to adjust your mindset when working with SQL to get the best performance.
Never Uninstall
I’ve been working through some self-study using the latest versions of Microsoft’s free developer tools, Visual Studio 2017 and SQL Server 2016 Developer, and I have been hitting some weirdness in doing simple things like applying code-first database changes to a simple project…
Turns out the culprit is artifacts left behind on my system when I (in theory) uninstalled the older versions of these tools to start with a “clean slate”. So I’m spending a couple of days debugging things I shouldn’t have to debug, because no developer tool uninstall from Microsoft is truly clean.
I’m starting to think I should look at a virtual machine host for my development. I did it all the time at my old company, so the interface isn’t a concern, and I could build a truly clean slate environment when I want to upgrade tools.
Who are you benefiting? The company or the customer?
Apple has a reputation for doing their own thing, often in the face of accepted assumptions. But when the changes a company makes detract from the benefits customers have grown to rely on, who benefits?
Apple’s latest iTunes update removed some of the key features of the app – managing apps on your devices, to name one – in favor of pushing their music and media services. But by eliminating this feature from the desktop version of iTunes, they have eliminated the key lock-in for using iTunes at all. There are plenty of apps – free or paid – that will use Apple’s API to sync music and movies to your phone. But nothing really served that extra role of managing which apps you have on which devices, and keeping them all up to date with the latest versions.
Code First Database Smells (I think…)
I’ve been a database developer (among other skills) for most of my professional career. My first real code was an automated data update tool in Lotus 1-2-3 for a distributed phone directory database. I went on to cut my developer baby teeth in FoxPro, then grew up using Microsoft SQL. When I think about a new application, I think about the database early, if not actually first. So I have some biases…
I’ve used Code First principles when learning Ruby with a MongoDB database on the project, but I was too wrapped up in the newness to think about the impacts. Now I’m working through a course on MVC in ASP.NET, using Visual Studio and MS SQL, so I’m on familiar ground. And although the instructor gave reasoned arguments for why Code First is a good practice in his experience, as a database guy my first impression is that there are some smells.