October 19, 2010

CDO.Message.1 error 80070005 Access is Denied

Posted in ASP, How-to, IIS at 10:46 pm by Ben

Okay, clearly we’re talking about some outdated stuff here, but we all have to work on outdated stuff sometimes, right?  I ran across this error trying to send mail from an ASP page using the CDO.Message object, and pretty much every Google result about how to fix it (forum posts all, not official answers) was wrong in one way or another.  The information below is for Windows 2003 running IIS6, including (and this is important) Small Business Server.

One answer I saw was to give IUSR write permission on C:\inetpub\mailroot\pickup, which is close (though less so if you’re running SBS or any other machine which doubles as an Exchange server).  Another, which did work for me, but I’d be damned if I was going to obliterate my security that way, was to change the website’s app pool from DefaultAppPool to ExchangeApplicationPool.  No, I won’t run my public-facing web site under an application pool that runs as Local System, thanks!

The answer was to set permissions on the mail pickup directory - though IUSR is not necessarily the account that needs the permission - and if you’re running SBS (or Exchange generally), it’s not at the above location; it’s at %ProgramFiles%\Exchsrvr\Mailroot\vsi 1\PickUp.  The user who needs permission is not the account set for the site (often IUSR, though if you’re using security, it may be the logged-in user), but the account for the app pool’s identity - in my case, Network Service.  This is why changing the site’s app pool to one running as Local System (that is to say, one with complete access to absolutely everything) worked, but isn’t actually something you’d want to do.
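The permission can be granted from the command line with something like the following - the path shown is the SBS/Exchange location, so adjust the path and the account to match your mailroot and your app pool’s identity:

```
cacls "C:\Program Files\Exchsrvr\Mailroot\vsi 1\PickUp" /E /G "NETWORK SERVICE":W
```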

October 18, 2010

Using Multiple UIWebView Objects

Posted in Code Snippets, iOS at 12:41 pm by Ben

If you’ve ever tried to put multiple UIWebView objects on screen and load documents into all of them at once, you may have stumbled across an undocumented limitation of the iPhone SDK: the loadRequest method on UIWebView is not re-entrant, and you’re often left with one or more of the views showing an error icon.  Fortunately, it’s very easy to leverage the UIWebViewDelegate protocol to resolve this issue, by ensuring that only one of the views is loading at a time.

Create a class that implements the UIWebViewDelegate protocol and includes a class-level UIWebView* variable named spinlock, then implement the following methods:

-(void)spinlockedWebViewLoad:(NSArray*)arr {
  NSAutoreleasePool* pool=[[NSAutoreleasePool alloc] init];
  UIWebView* webView=(UIWebView*)[arr objectAtIndex:0];
  NSURLRequest* request=(NSURLRequest*)[arr objectAtIndex:1];
  @synchronized(self) {
    spinlock=webView;
    [webView loadRequest:request];
    while (spinlock==webView) [NSThread sleepForTimeInterval:0.1];
  }
  [pool drain];
}

-(BOOL)webView:(UIWebView*)webView shouldStartLoadWithRequest:(NSURLRequest*)request navigationType:(UIWebViewNavigationType)navigationType {
  if (spinlock!=webView) {
    [NSThread detachNewThreadSelector:@selector(spinlockedWebViewLoad:) toTarget:self withObject:[NSArray arrayWithObjects:webView,request,nil]];
    return NO;
  }
  return YES;
}

-(void)webViewDidFinishLoad:(UIWebView*)webView {
  spinlock=nil;
}

-(void)webView:(UIWebView*)webView didFailLoadWithError:(NSError*)error {
  spinlock=nil;
}

Set your multiple web views to use the same instance of this object as their delegate before loading.  (It’s often easiest to make the object that calls them be this delegate.)  Here’s how it works:
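Hooking it up might look something like this, assuming you’ve named the delegate class WebViewCoordinator (a name I’m making up here for illustration):

```objc
WebViewCoordinator* coordinator = [[WebViewCoordinator alloc] init];
webViewA.delegate = coordinator;
webViewB.delegate = coordinator;
// Both loads are requested "at once"; the shared delegate serializes them.
[webViewA loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/a.html"]]];
[webViewB loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/b.html"]]];
```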

Our own method spinlockedWebViewLoad will run on an alternate thread for each attempted load request coming from any UIWebView with this as its delegate.  We use the class-level variable spinlock to track which UIWebView is currently allowed to perform a load operation (this is used in the next method).  By synchronizing, we allow only one thread at a time to hold this spinlock, and it remains in the while loop inside the synchronized block until some other thread unsets the spinlock.

Any attempt to load, whether from your code or from the user clicking a link on the page already in the view, will ask your code whether it should load by calling the delegate method shouldStartLoadWithRequest.  In our implementation of that method, we check whether the UIWebView that wants to load is the one (the only one) currently “allowed” to load, by checking whether it’s the one that spinlock is currently set to.  If not, we tell it no, it’s not allowed to proceed with the load request, but we kick off a new thread on the above method, essentially requeueing that same request.  Of course, if the UIWebView is the one that has the spinlock, it returns yes so that the load may proceed.

When the load either finishes or fails, we release the spinlock.  The thread which is currently blocked by the while loop unblocks, exits the synchronized block, and one of the other threads waiting to enter the synchronized block (if any) is allowed to do so.

Basically, spinlockedWebViewLoad is traffic control, the delegate method shouldStartLoadWithRequest ensures that all requests do go through traffic control (as long as the UIWebView has its delegate set), and webViewDidFinishLoad and didFailLoadWithError signal traffic control that one load is done and the next can begin.

The only functionality you may lose (and you can add more code to effectively regain it) comes up if you need to use these delegate methods for other purposes as well.  It’s generally still doable by adding your code to these methods, but if you need to operate on the navigationType parameter of shouldStartLoadWithRequest, it can be tricky, because by the time we allow the load to proceed, the type is always “other” – navigation kicked off by our code, not a user click.  If you absolutely need this info when loading, I recommend recording the navigation type in an NSDictionary (keyed by the UIWebView in which it occurs) when the load is blocked, on the assumption that that’s the “real” navigation, and pulling it back out when loading is allowed.  But this comes up far less often than the larger issue of multiple web views, so I’ll leave the actual code as an exercise for the reader.

July 15, 2010

SQL CLR Primer: Part 6

Posted in .Net, How-to, SQL at 11:22 am by Ben

The last CLR object left to demonstrate (check out parts 1-5 if you haven’t seen them) is the aggregate function.  This is another one that you can’t do without CLR, and if you’ve ever gotten user requests where they want, for example, a delimited list of values in a single row, you have some idea how useful this can be.

I assume you’re already familiar with SQL Server’s built-in aggregate functions like SUM and AVG, and how they work with a GROUP BY clause to produce results.  The crux of it is basically many values enter, one value leaves.  Create a new Aggregate in a SQL CLR project in Visual Studio, and you’ll see a pretty hefty template appear in your editor.  The struct that defines an aggregate has four functions, all of which must be defined.

The Init function performs setup and initialization of the object.  The database engine always calls Init first.  Because the database engine may reuse a single object when performing multiple aggregations, rather than instantiating a new one, be sure to initialize all your variables, etc. in this function, not just when they’re declared.

Accumulate will be called once for each value that goes into the aggregation function.  Note that this may be zero, one, or many, many times, depending on how the rows are grouped in the SQL query that calls your aggregate.

Merge is kind of a funky one.  To optimize processing speed, the database engine may create multiple instances of the object to work on a large number of rows simultaneously.  This function takes those multiple instances and puts them together.  Often what goes on in this method is very similar to what goes on in the Accumulate method, but it will operate on a property of the input object instead of the input value itself.  If you’re doing anything very complicated, though, the operation of this method could involve some careful planning.

Finally, Terminate is called, and its only job is to return the result.

You can also define whatever properties you need to perform the operation.  Typically this will at least involve an accumulator, but you may need additional variables if your function is complicated.

So, the basic operation of your aggregate will be: 1) Init 2) Accumulate, zero or more times 3) Terminate.  However, what may happen is: 1) Init more than one instance 2) Accumulate 3) Call merge on any one instance with any other as a parameter; repeat until only one remains 4) Terminate.

I feel as though once it’s broken down, it’s not so terribly difficult, and here’s an example to help out further.  This aggregate accepts strings and concatenates them with a newline delimiter (useful to produce a list of names for report output, for example).

[Serializable]
[Microsoft.SqlServer.Server.SqlUserDefinedAggregate(Format.Native)]
public struct Aggregate1
{
  private SqlString accum;
  public void Init()
  {
    accum = "";
  }
  public void Accumulate(SqlString Value)
  {
    accum += accum.Value.Length > 0 ? "\n" + Value.Value : Value.Value;
  }
  public void Merge(Aggregate1 Group)
  {
    accum += accum.Value.Length > 0 ? "\n" + Group.accum.Value : Group.accum.Value;
  }
  public SqlString Terminate()
  {
    return accum;
  }
}

You’ll note I used a SqlString as my accumulator.  This is because in my SqlUserDefinedAggregate definition, I made it “Format.Native”.  This is by far the easiest to use, but it limits your struct’s properties to SQL types.  This is due to how the database engine interfaces with .Net to manage these objects.  Using a StringBuilder would be vastly preferable, but I’d then be required to implement IBinarySerialize and serialize and deserialize the StringBuilder myself in Read and Write methods.  Since it is just one object with an obvious representation, that’s not too difficult, but bear in mind it could get crazy if you have a lot of properties to keep track of.  Here’s how to do it.  Note “Format.UserDefined” and “MaxByteSize” which can be 1 to 8000, or use -1 to indicate it may be longer than 8000.  Not pictured here, you’ll also need to import System.IO for the IBinarySerialize interface (and in this case System.Text for StringBuilder).

[Serializable]
[Microsoft.SqlServer.Server.SqlUserDefinedAggregate(Format.UserDefined, MaxByteSize = -1)]
public struct Aggregate1 : IBinarySerialize
{
  private StringBuilder accum;
  public void Init()
  {
    accum = new StringBuilder();
  }
  public void Accumulate(SqlString Value)
  {
    if (accum.Length > 0) accum.Append("\n");
    accum.Append(Value.Value);
  }
  public void Merge(Aggregate1 Group)
  {
    if (accum.Length > 0) accum.Append("\n");
    accum.Append(Group.accum);
  }
  public SqlString Terminate()
  {
    return accum.ToString();
  }
  public void Read(BinaryReader r)
  {
    accum = new StringBuilder(r.ReadString());
  }
  public void Write(BinaryWriter w)
  {
    w.Write(accum.ToString());
  }
}
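For completeness, here’s roughly what registering and calling the aggregate looks like once the assembly is loaded - the aggregate, table, and column names here are my own inventions for illustration:

```sql
CREATE AGGREGATE dbo.ConcatList (@Value nvarchar(4000))
RETURNS nvarchar(max)
EXTERNAL NAME AssemblyName.Aggregate1;

-- One row per department, with a newline-delimited list of names.
SELECT Department, dbo.ConcatList(EmployeeName) AS Names
FROM Employees
GROUP BY Department;
```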

So there it is.  Not the simplest thing ever, but not too complex, and extremely powerful.

And, if you’ve read all the posts in the series, you now know at least the basics of how to create every type of CLR object. That’s the end of what can properly be called a primer, but I may periodically make more posts in the series with tips or updates for new versions.

May 9, 2010

SQL CLR Primer: part 5

Posted in .Net, How-to, SQL at 6:43 pm by Ben

The previous posts in this series covered basic setup of your SQL Server instance to enable CLR assemblies and how to load them, as well as how to create scalar-valued UDFs, stored procedures, triggers, and user-defined types.  In this post, I’m going to demonstrate how to create a table-valued UDF using CLR.

As noted previously, start up Visual Studio and either create a new SQL Server project (the how-to for this was in part 1) or load one you created previously.  Right-click on the project in Solution Explorer and choose Add » User-Defined Function.

A major downside is that separate templates are not provided for scalar- and table-valued functions, even though their definitions are quite different.  For one thing, you need two functions.  I like to think of them as the “input” and “output” functions.  The “input” function takes as input parameters what you’ll actually pass as input parameters to the UDF in SQL.  It returns an IEnumerator, and the enumeration will contain an object (of whatever construction you like) for each row.  The “output” function is called once for each item in the collection.  It takes the object as an input parameter and has an output parameter for each column in the output table.  SQL will collect all these together as the final output.

The SqlFunction attribute is only defined on the “input” function.  It has some required parameters, which are obvious enough, I think, that I’ll just show them first, but I will explain them also.  So here’s some sample code:

private struct MyStuff
{
  public int n;
  public Guid g;
  public MyStuff(int i)
  {
    n = i;
    g = Guid.NewGuid();  // note: new Guid() would always produce all zeros
  }
}

[Microsoft.SqlServer.Server.SqlFunction(FillRowMethodName="OutputMethod", TableDefinition="Outrow1 INT, Outrow2 UNIQUEIDENTIFIER")]
public static IEnumerator Function1(SqlInt32 x)
{
  for (var i = 0; i < x; i++)
    yield return new MyStuff(i);
}

public static void OutputMethod(object theStuff, out SqlInt32 Outrow1, out SqlGuid Outrow2)
{
  MyStuff ms = (MyStuff)theStuff;
  Outrow1 = ms.n;
  Outrow2 = ms.g;
}

Okay, so what do we have?  We have an input function named Function1, an output function named OutputMethod, and a struct named MyStuff.  It should be clear that the struct is just a way to bundle up whatever information we want to return in a single object that the enumerator can use.  I have one property for each column in my output table, but that’s just how my example shakes out.  You can do whatever boxing and unboxing is appropriate to your implementation between the functions, and you may have more or fewer properties than columns as a result.  Note that the input function returns an IEnumerator, and takes SQL data types as its input.  (And note again something I referenced in an earlier post about how easy they’ve made it to use things like SqlInt32 and int interchangeably.)  The attribute on the input function defines our UDF’s return table structure in the TableDefinition property, and points the way to the output function in the FillRowMethodName property.  The output function is a void, as it relies entirely on its output parameters to return data.  These are also SQL data types, and the only other parameter is an object, so that it can take whatever type is inside the enumerator returned by the input function.

After loading the assembly (see part 1), you can create this function in SQL with:

CREATE FUNCTION dbo.Function1 (@x int)
RETURNS TABLE (Outrow1 int, Outrow2 uniqueidentifier) AS
EXTERNAL NAME AssemblyName.ClassName.Function1;

When you call it, SQL will call your input function – here, Function1 – with whatever parameters you provide.  It will get back an IEnumerator.  It will iterate over the items in this enumerator, and for each one, it will call your output function, named in FillRowMethodName – here, OutputMethod – with this object as the first parameter and ready SQL-typed variables for each of the remaining (output) parameters, in the order the columns are defined for your output table.  Each of those “sets” of results makes up one row of the output table.
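Calling the example function is then just like calling any table-valued function; with the code above, this produces a five-row table:

```sql
SELECT Outrow1, Outrow2 FROM dbo.Function1(5);
```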

An interesting point which the MSDN Library entry on CLR Table-Valued Functions points out is that CLR table-valued UDFs can be (and generally are) processed incrementally.  The rows are generated by the output function essentially one-by-one, as the enumeration returned by the input function is iterated over.  These results are then returned or used for further processing as they come in.  In contrast, a SQL table-valued UDF is instantiated in memory in its entirety first and only then goes on for further processing.  Obviously for functions which return very large result sets, the performance benefits here can be substantial, and it’s one of the few times when CLR is “by default” better than native SQL.

May 2, 2010

SQL CLR Primer: part 4

Posted in .Net, How-to, SQL at 12:09 pm by Ben

In this series’ previous posts, I’ve shown how to create CLR scalar-valued UDFs, stored procedures, and triggers, as well as some necessary supporting functions like loading your CLR code and setting permissions.  Today, I’m going to show you how to create a User-Defined Type – one of the objects which can only be created using CLR code, not with T-SQL.

Start Visual Studio and create a new SQL Server Project (if you’re just joining, I showed you how in part 1) or open one you saved previously.  Right-click on the project in Solution Explorer and choose Add » User-Defined Type.  You’ll see that the user-defined type is a struct, not a class (the differences are subtle, and if you’re not familiar with them, don’t worry about it for now), and the Microsoft.SqlServer.Server… attribute is defined on the struct, rather than a method inside the class as it was for an SP, UDF, or trigger.

A number of important methods and properties are in the template for you to set up:

ToString is an important method, which in addition to being called explicitly, is called implicitly when your type is cast to a char, varchar, nchar, or nvarchar.  If we use the included geography and geometry types (which are, in fact, CLR types) as a point of reference, it can be a good idea to create your ToString method so that its output can go as input to the Parse method (below) and recreate the object in its entirety.  Though personally I feel this could be overkill for very large or highly complex types.

Parse takes a string as input and creates an object of this type with many of its property values already set based on the contents of that input.  The mirror image of ToString above, it is called not only explicitly, but implicitly when a character type is cast to your type.

Because one of the differences between a class and struct is that a struct instance cannot be null, you’ll have to define a special null value for your type.  If you’re familiar with interfacing with SQL from CLR code, you’re already aware that null database values show up as DBNull.Value, not simply null.  Same idea.  The static method Null returns a null instance of your type, and the IsNull read-only property returns true if the instance is null.  You can see that the template creates a private property to track whether the instance is null, but this is not the only way to go.  This is your type, and the implementation – in fact, the very meaning of null in your type’s context – is up to you.

Finally, there are place-holder methods and fields in the template to show you some basics.  As with a typical CLR struct, you can create methods either as static methods or instance methods.  You can also see that the static method returns a SQL data type, while the instance method returns a CLR data type.  This is not a requirement; you can return SQL or CLR types from any method.
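To make this concrete, here’s a minimal sketch of such a type - a trivial wrapper around an int; the names and the null handling are my own choices, not the template’s:

```csharp
// (assumes using System.Data.SqlTypes;)
[Serializable]
[Microsoft.SqlServer.Server.SqlUserDefinedType(Format.Native)]
public struct MyType : INullable
{
  private bool m_Null;
  private int m_Value;

  // Called explicitly and on casts to character types.
  public override string ToString()
  {
    return IsNull ? "NULL" : m_Value.ToString();
  }

  public bool IsNull
  {
    get { return m_Null; }
  }

  // The special null instance of the type.
  public static MyType Null
  {
    get
    {
      MyType t = new MyType();
      t.m_Null = true;
      return t;
    }
  }

  // Mirror image of ToString: rebuilds an instance from a string.
  public static MyType Parse(SqlString s)
  {
    if (s.IsNull) return Null;
    MyType t = new MyType();
    t.m_Value = int.Parse(s.Value);
    return t;
  }

  // An instance method, callable in SQL with dot notation.
  public int Doubled()
  {
    return m_Value * 2;
  }
}
```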

You load your type much the same way as the other classes of CLR objects.  After loading your assembly (as shown in part 1), use the CREATE TYPE command to create it in SQL:

CREATE TYPE MyType
EXTERNAL NAME [AssemblyName].[StructName];

That’s all there is to it.  You can now declare columns or variables as your type, and access any properties or methods.  I’ll also note SQL’s syntactical conventions here, though.  Instance properties and methods are called as you expect, using dot notation:

PRINT @x.StringProperty;
PRINT @x.StringMethod();

Static properties and methods, however, are delimited with two colons:

PRINT MyType::StaticProperty;
PRINT MyType::StaticMethod();

And that’s about it!  Well, not nearly, but it’s enough to get you started.

April 24, 2010

SQL CLR Primer: part 3

Posted in .Net, How-to, SQL at 10:51 pm by Ben

In parts 1 and 2 of this series, I showed you how to create a CLR scalar-valued UDF and a CLR stored procedure, as well as how to enable CLR on your SQL Server instance, load your CLR objects into your database, and access your data from CLR using the calling context.  Phew!  In this installment, I want to demonstrate CLR triggers, and explain how to set the requisite permissions for assemblies that need to access external data.

As last time, you can launch Visual Studio and create a new SQL Server Project or load one you’ve saved previously.  Right click on the project in Solution Explorer and choose Add » Trigger.  One of the first things you may notice in this template relative to the previous two (although it starts commented out) is the difference in the attribute attached to the method.  While the UDF and SP only required that the attribute be declared on the method, the CLR trigger attribute has certain properties which are required.  Note that the other two do have properties, but none are required, and since we’re starting simply we’re leaving them out for now.

The Name property, for example, is present on all three of these, and although the trigger template includes it, it is not required for any of them.  Feel free to remove it if you’d like to continue to keep things simple.  The Target and Event properties, however, are required for triggers.  For target, specify the name of the table or view on which the trigger will be applied.  For event, specify the DML action (or actions) which activate it.  The syntax for Event is exactly the same as it is in a CREATE TRIGGER statement.  “FOR”, “AFTER”, or “INSTEAD OF” followed by “INSERT”, “DELETE”,  and/or “UPDATE”, with commas between them if you specify more than one.  Note that one thing all of this means is that you’ll have to recompile your code if, for example, a table name changes.  And if you have two similarly structured tables that need two identical triggers, you’ll have to do your copy/pasting here.  You won’t be able to take the shortcut of loading a single method and using two CREATE TRIGGER AS EXTERNAL NAME statements which specify the same function name.

So without further ado, here is an example insert trigger which takes the inserted values, bundled up as XML, and beams them to a remote server as an HTTP POST request:

// (requires the System.Net, System.IO, and System.Data.SqlClient namespaces)
[Microsoft.SqlServer.Server.SqlTrigger(Target="MyTable", Event="FOR INSERT")]
public static void MyTrigger()
{
  using (SqlConnection conn = new SqlConnection("context connection=true"))
  {
    conn.Open();
    SqlCommand comm = new SqlCommand("SELECT * FROM inserted FOR XML RAW;", conn);
    object result = comm.ExecuteScalar();
    if (result != null)
    {
      HttpWebRequest x = (HttpWebRequest)HttpWebRequest.Create("http://example.com/receiver.aspx");
      x.Method = "POST";
      using (StreamWriter sw = new StreamWriter(x.GetRequestStream()))
        sw.Write((string)result);
      HttpWebResponse y = (HttpWebResponse)x.GetResponse();
      if (y.StatusCode != HttpStatusCode.OK)
        throw new Exception("HTTP RESPONSE " + y.StatusCode.ToString());
    }
  }
}

As we saw last time, we’re using the connection string “context connection=true” in order to have our SQL executing in the calling context.  This is extra important here, since – as we can with a standard SQL trigger – we’re using the inserted special table to access the rows that are being inserted.  We use the SQL FOR XML clause in our SELECT statement to bundle everything up as a single XML string (easier than doing it ourselves), which we then write into our HTTP request.

You can compile this and load it into your database using the methods noted previously.  However, because it accesses external data with the HTTP connection, you will need to load it with the External Access permission level.  (Unsafe will work, too, but is less restrictive than what’s required for it to function.)  I mentioned in passing in part 1 that this is only allowed if the database has the TRUSTWORTHY option turned on and if the database owner has the appropriate permissions.  The trustworthy option is read-only in the SSMS GUI.  To set it, you’ll need to issue the SQL command:

ALTER DATABASE [DatabaseName] SET TRUSTWORTHY ON;
Easy enough.  If your SQL deployment is very simple, you probably have the sa account as the owner of the database, in which case you’re all set, because sa has the requisite permissions.  (A good thing, since permissions can’t be granted to sa.)  If another user is the database owner, you can grant the permission with the following command:

USE master;
GRANT EXTERNAL ACCESS ASSEMBLY TO [UserName];
Or if the assembly were one that requires the Unsafe permission level:

USE master;
GRANT UNSAFE ASSEMBLY TO [UserName];
Note that the master database must be the calling context for the GRANT commands.  Once that’s been done, you can load the assembly exactly as I’ve shown previously, and then create the trigger that calls it:

CREATE TRIGGER MyTrigger ON MyTable
FOR INSERT
AS EXTERNAL NAME AssemblyName.ClassName.MyTrigger;

(Unlike previous examples, I’ve used the method name from my example instead of “MethodName”, and similarly for the table name and DML commands.)

So, with the exception of the requirement that you specify the target table and DML commands in the CLR code, you can see that a trigger is not much more complex than a scalar UDF or a stored procedure.  It can access the inserted and deleted tables exactly the same way a SQL trigger can.  And you now know what settings need to be in place for the three different CLR assembly permission levels.   A quick recap of what those levels are…  “Safe” can only use managed code and cannot touch any data external to the calling database.  “External access” can only use managed code, but can access data outside the database (other databases, file system, network, Internet, whatever).  “Unsafe” can do anything, including executing unmanaged code.
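For reference, the permission level itself is declared when you load the assembly; with an assumed name and path, that looks like:

```sql
CREATE ASSEMBLY MyAssembly
FROM 'C:\Assemblies\MyAssembly.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS;
```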

Tune in next weekend for another CLR object (we’re getting into the crazy ones now!) and some more supporting info.

April 21, 2010

.Net 4 Numerics

Posted in .Net at 10:48 pm by Ben

One of the many new features in .Net Framework v4.0 is the System.Numerics namespace.  This contains two classes whose applicability may be somewhat limited, but they’ll be incredibly useful when you do need them.

The BigInteger class can contain and operate on an arbitrarily large integer value.  It’s immutable, like String, which means multiple operations on very large values could get expensive in terms of creating and destroying them.  But just having the ability to work with such large values without jumping through hoops can be extremely useful.  The class has methods which include most of the functions available to the other integer types, as well as many from the Math class.  I can certainly imagine this class being useful for custom hashing or encryption functions, which often involve modular arithmetic on very large numbers.  And I can even think of tweaking code I wrote a while back that needs to do bitwise operations on more than 64 bits at a time to use this instead of arrays of smaller integers.

The other class in the namespace is Complex, which can store and operate upon complex numbers.  These are stored using two Double values – for the real and imaginary parts – and again, the class contains many methods for performing functions on these numbers: arithmetic, powers and logarithms, trigonometry, as well as complex-specific functions like conjugation and converting between real/imaginary and magnitude/phase representations.  Again, this is something anyone developing a science or engineering platform probably could’ve coded, but of course the point of a framework is that you don’t have to do this sort of dirty work.
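A quick taste of both classes (a minimal sketch; the values are just for demonstration):

```csharp
using System;
using System.Numerics;

class NumericsDemo
{
  static void Main()
  {
    // 2^100 -- far too large for any built-in integer type
    BigInteger big = BigInteger.Pow(2, 100);
    Console.WriteLine(big); // 1267650600228229401496703205376

    // 3+4i: magnitude is 5, conjugate is 3-4i
    Complex c = new Complex(3, 4);
    Console.WriteLine(c.Magnitude);          // 5
    Console.WriteLine(Complex.Conjugate(c)); // (3, -4)
  }
}
```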

This is just one piece of the new framework that caught my eye as being extremely convenient.  I expect to have a lot more to share as I continue using it (and have more time to post – it’s been a busy week!)

April 18, 2010

SQL CLR Primer: part 2

Posted in .Net, How-to, SQL at 8:04 pm by Ben

In part 1 of this series, I showed you how to enable CLR on your server and create and load a simple scalar-valued user-defined function.  In part 2, I’m going to show you how to access the database from your CLR code and write a stored procedure.

Go ahead and launch Visual Studio, and either create a new SQL Server Project as described in part 1, or load one you have saved from earlier.  Right click on the project in Solution Explorer and choose Add » Stored Procedure.  You can see the primary difference between a UDF and a stored procedure immediately in the template: the stored procedure method is declared void (in VB, it’s a Sub instead of a Function) – it returns no value.  To create a stored procedure which accepts parameters, simply add those parameters to the method.  As noted in part 1, they should be of types that are defined in SqlTypes namespace.

Of course, a stored procedure is (generally, though less so with CLR) not much use unless it has some impact on the database.  You can create a database connection in a CLR routine exactly as in any .Net code, using the SqlConnection class.  Most often, you’ll want to connect to the database in which the procedure is running, as the user running it, and even as part of the same session and transaction.  You can achieve this using the special connection string “context connection=true”.  You can create other connections as usual, though those will usually require elevated permissions.

None of this is terribly complex, but it is a little abstract, so here’s some absurdly simple sample code to get you going:

[Microsoft.SqlServer.Server.SqlProcedure]
public static void RecordLaterTime(SqlInt32 hours)
{
    using (SqlConnection conn = new SqlConnection("context connection=true"))
    {
        conn.Open();
        SqlCommand comm = new SqlCommand("INSERT WhatTimeWillItBe VALUES (@time);", conn);
        comm.Parameters.Add("@time", SqlDbType.DateTime);
        comm.Parameters["@time"].Value = DateTime.Now.AddHours(hours.Value);
        comm.ExecuteNonQuery();
    }
}
This stored procedure takes an int as a parameter.  It then inserts into a table named WhatTimeWillItBe (which has a single datetime column) a value equal to the current date/time plus the number of hours passed in the parameter.  Clearly this is very simple, and could be done in T-SQL without resort to CLR code, but it demonstrates how the various pieces fit together.

Loading your assembly into the database uses exactly the same CREATE ASSEMBLY call as described in part 1 for a UDF (or for any CLR object).  Once the assembly is loaded, create your stored procedure in SQL as follows:

CREATE PROCEDURE [ProcedureName] (Parameters)
AS EXTERNAL NAME AssemblyName.ClassName.MethodName;

But I’m sure that by now you could have guessed that.
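And calling it from T-SQL then works like any other stored procedure (assuming you named the SQL parameter @hours when creating it):

```sql
EXEC dbo.RecordLaterTime @hours = 3;
SELECT * FROM WhatTimeWillItBe;
```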

This part in the series was quick and easy, but now you know how to create CLR stored procedures as well as methods, and you know how to access the calling database from your CLR code.  Next week I’ll show you another CLR object type and how to set permissions for access levels other than Safe so you can get some real power into your CLR code!

April 16, 2010

Indexes and Unicode

Posted in .Net, SQL at 6:02 pm by Ben

I had a real WTF moment today that I want to share with any SQL developers out there (or other developers whose code talks to SQL Server), in hopes of saving you from falling into this trap, because it’s a doozy.  First, tell me, how different do these queries look to you?

SELECT * FROM MyTable WHERE MyField='ABCDE';
SELECT * FROM MyTable WHERE MyField=N'ABCDE';
I think most of you would answer the way I did: “Not too different.  ‘ABCDE’ doesn’t contain any Unicode characters, so this shouldn’t make a difference.”  Well, if MyField is declared char or varchar (non-Unicode types), and there’s an index on it, then it makes a world of difference.  Because in that case, the first query will use the index, and the second one won’t!  Obviously if MyTable has very many rows, the performance delta could be huge.

Of course no one would ever write the second query, but you certainly might write the following .Net code:

SqlCommand myCommand = new SqlCommand("SELECT * FROM MyTable WHERE MyField=@stuff;", myConnection);
myCommand.Parameters.AddWithValue("stuff", myTextBox.Text);
SqlDataReader myReader = myCommand.ExecuteReader();

Seems perfectly reasonable, right?  Except that you’ve effectively just written query #2, and it won’t use your index!  .Net strings are always Unicode, and the AddWithValue method will send them as a Unicode type when it builds the SQL query.  Feel free to check my work with a test setup and SQL Profiler.

The solution is to specify the parameter’s type when you’re using a parameter in a situation like this:

SqlCommand myCommand = new SqlCommand("SELECT * FROM MyTable WHERE MyField=@stuff;", myConnection);
myCommand.Parameters.Add("stuff", SqlDbType.VarChar);
myCommand.Parameters["stuff"].Value = myTextBox.Text;
SqlDataReader myReader = myCommand.ExecuteReader();

I dislike excess lines of code as much as anyone, but there’s no way to specify both the type and the value in a single method call, and one extra line is probably worth it to ensure your indexes are properly utilized!

April 13, 2010

Visual Studio 2010 Launch Keynote

Posted in Uncategorized at 8:17 pm by Ben

Unfortunately I couldn’t make it to Las Vegas for the Visual Studio 2010 launch event at DevConnections, but I did watch the live stream of the keynote yesterday.  If you missed it, you can still watch it here.  It’s well worth it (except for the first 45 minutes, which is just the ‘starting soon’ blurb).  Skip to the first chapter mark to watch just the presentation, or go slightly before that, to around the 46:43 mark, to watch the cute retrospective they played beforehand.

I’m looking forward to getting my hands on the new version, as it sounds like there are a lot of excellent new features.  I’ll be sure to tell you all about them when I do!
