This really doesn't have anything to do with C#. This is your classic nvarchar vs varchar issue (or unicode vs ASCII). The same thing happens if you mix collations.
I'm not sure why anyone would choose varchar for a column in 2026 unless you have some sort of ancient backwards-compatibility situation.
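For anyone who hasn't hit this, here's a minimal T-SQL sketch of the mismatch (table and index names are made up). .NET strings are Unicode, so ADO.NET-based libraries like Dapper send them as nvarchar by default; against a varchar column the implicit conversion lands on the column side and, depending on collation, the seek can degrade to a scan:

```sql
-- Hypothetical schema: an indexed varchar column.
CREATE TABLE Users (Id int PRIMARY KEY, Email varchar(100) NOT NULL);
CREATE INDEX IX_Users_Email ON Users (Email);

-- Parameter typed nvarchar (the default for a .NET string):
-- the plan wraps the column in CONVERT_IMPLICIT(nvarchar(4000), Email, ...)
-- and can show an index scan instead of a seek.
SELECT Id FROM Users WHERE Email = @p;   -- @p nvarchar(4000) -> scan risk

-- Parameter typed varchar matches the column type and seeks.
SELECT Id FROM Users WHERE Email = @p;   -- @p varchar(100)   -> seek
```

In Dapper the usual workaround is passing the value as `new DbString { Value = email, IsAnsi = true, Length = 100 }` so the parameter goes over the wire as varchar(100) instead of nvarchar.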
beart 29 minutes ago [-]
I agree with your first point. I've seen this same issue crop up in several other ORMs.
As to your second point: VARCHAR uses N + 2 bytes whereas NVARCHAR uses N*2 + 2 bytes for storage (at least on SQL Server). The vast majority of character fields in databases I've worked with do not need to store unicode values.
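The per-character difference is easy to see with DATALENGTH (the +2 bytes of variable-length row bookkeeping aren't included in what it reports):

```sql
-- varchar stores 1 byte per character, nvarchar (UTF-16) stores 2.
SELECT DATALENGTH(CAST('hello' AS varchar(50)))   AS varchar_bytes,   -- 5
       DATALENGTH(CAST(N'hello' AS nvarchar(50))) AS nvarchar_bytes;  -- 10
```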
wvenable 24 minutes ago [-]
> The vast majority of character fields in databases I've worked with do not need to store unicode values.
This has not been my experience at all. Exactly the opposite, in fact. ASCII is dead.
SigmundA 53 seconds ago [-]
The vast majority of text fields I see are coded values that are perfectly fine as ASCII, but I deal mostly with English-language systems.
Text fields that users can type into directly, especially multiline ones, tend to need unicode, but they are far fewer.
SigmundA 8 minutes ago [-]
To complicate matters, SQL Server can do nvarchar compression, but they should have just done UTF-8 long ago: https://learn.microsoft.com/en-us/sql/relational-databases/d...
Also, UTF-8 is actually just a varchar collation, so you don't use nvarchar with it at all.
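Concretely: as of SQL Server 2019, UTF-8 rides on varchar via a `_UTF8` collation rather than being a separate type (column and table names here are illustrative):

```sql
-- UTF-8 storage uses varchar, not nvarchar; pick a *_UTF8 collation.
CREATE TABLE Notes (
    Body varchar(1000) COLLATE Latin1_General_100_CI_AS_SC_UTF8 NOT NULL
);
-- Non-Latin text now fits in a varchar column at 1-4 bytes per character.
```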
_3u10 25 minutes ago [-]
Generally if it stores user input it needs to support Unicode. That said UTF-8 is probably a way better choice than UTF-16/UCS-2
SigmundA 10 minutes ago [-]
UTF-8 is a relatively new thing in MSSQL and had lots of issues initially. I agree it's better and should have been implemented in the product long ago.
I have avoided it and haven't followed whether the issues are fully resolved; I would hope they are.
SigmundA 3 minutes ago [-]
Yes I have run into this regardless of client language and I consider it a defect in the optimizer.
smithkl42 4 minutes ago [-]
Been bitten by that before: it's not just an issue with Dapper; it can also hit you with Entity Framework.
adzm 6 minutes ago [-]
Even better is Entity Framework and how it handles null strings: it creates some strange predicates in SQL that end up unable to seek on string indexes.
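For reference, this is the shape of the null-compensating predicate EF Core can emit for a nullable string comparison (a sketch; the exact SQL varies by version and settings):

```sql
-- C# source: ctx.People.Where(p => p.Name == name)
-- Generated WHERE clause (sketch), matching C# null semantics:
--   ([p].[Name] = @name) OR ([p].[Name] IS NULL AND @name IS NULL)
-- The OR branch over IS NULL blocks a simple seek on an index keyed
-- on [Name]. EF's UseRelationalNulls option trades this for a plain
--   [p].[Name] = @name
-- predicate, accepting SQL three-valued null semantics instead.
```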
enord 14 minutes ago [-]
This is due to UTF-16, an unforgivable abomination.
jiggawatts 28 minutes ago [-]
This feels like a bug in the SQL query optimizer rather than Dapper.
It ought to be smart enough to convert a constant parameter to the target column type in a predicate constraint and then check for the availability of a covering index.
wvenable 19 minutes ago [-]
It's the optimizer caching the query plan as a parameterized query. It's not re-planning the index lookup on every execution.
SigmundA 13 minutes ago [-]
The parameter type is part of the cache identity; nvarchar and varchar would have two cache entries with possibly different plans.
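You can see this directly with sp_executesql: the same statement text with different parameter type declarations produces separate cache entries, each planned for its own parameter types (table name assumed from the earlier example):

```sql
-- Entry 1: nvarchar parameter -> plan may convert the column and scan.
EXEC sp_executesql
    N'SELECT Id FROM Users WHERE Email = @p',
    N'@p nvarchar(4000)', @p = N'a@example.com';

-- Entry 2: varchar parameter -> separate cache entry, can seek.
EXEC sp_executesql
    N'SELECT Id FROM Users WHERE Email = @p',
    N'@p varchar(100)', @p = 'a@example.com';

-- sys.dm_exec_cached_plans joined to sys.dm_exec_sql_text shows two rows.
```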