33 Comments

  1. Anonymous

    LINQ to SQL queries are always a big CPU hog … often we end up writing our own queries with SqlCommand instead anyway.

  2. @Anonymous,

    Agreed, LINQ to SQL can definitely be heavy at times; we've often fallen back to writing SQL queries by hand to simplify the query and take better advantage of indexes. Usually we do this only for the slower queries. For the rest, we found the default LINQ to SQL queries good enough (and of course hugely faster to write).

    By contrast, compiling the queries was almost always worth it for CPU savings.
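
    For reference, a compiled LINQ to SQL query looks roughly like this (a minimal sketch; ShopDataContext and Customer stand in for the usual designer-generated types):

    ```csharp
    using System;
    using System.Data.Linq;
    using System.Linq;

    static class CustomerQueries
    {
        // Compiled once per AppDomain: the LINQ expression is translated to SQL
        // a single time instead of on every execution, which is where the CPU
        // savings come from.
        public static readonly Func<ShopDataContext, string, IQueryable<Customer>> ByCity =
            CompiledQuery.Compile((ShopDataContext db, string city) =>
                db.Customers.Where(c => c.City == city));
    }

    // Usage inside a request:
    // using (var db = new ShopDataContext(connectionString))
    // {
    //     var customers = CustomerQueries.ByCity(db, "Seattle").ToList();
    // }
    ```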

  3. Nice write-up, man. I always felt that compiled queries performed much better than the regular ones.

    I have one question: what is the best practice for managing garbage collection time?

  4. Raymond

    Nice article! Sometimes it takes more time to find these issues than to actually build the apps. Agree 100% with the comments about LINQ and EF; we too use micro ORM frameworks as the alternative.

  5. Great content! I’ve personally seen all of these things happen in production environments and fixed them so this is dead on accurate.

    I’m consulting in Irvine doing MVC at the top pay grade and am seeing this type of stuff littered throughout the code in the organization. I appreciate you writing this so I can simply link to it as a resource and share with the team.

    Another frequent problem I've seen is heavy use of server-side network requests, database queries, long-running processes, and uploads tying up threads, which hogs memory and kills the server. Using caching, prefetching via a service, and offloading to SaaS can help reduce these problems.

  6. @jalpesh, thanks.

    Re: managing GC time, I’ll write more about this in an upcoming post. The short answer is:

    1. Reduce allocation rate.
    2. Release objects before they hit Gen2 (to prevent many full collections).
    3. Reduce the number of large objects (>85,000 bytes).
    4. Make sure the finalizer thread does not become blocked.

    Of course, the tricky thing is figuring out which problem you have when GC is high, and how to fix it. This usually requires extensive debugging (if you want to dive deep, Tess Ferrandez's blog is a good source of info on memory debugging). We also cover some fast production memory debugging techniques in the LeanSentry Production Troubleshooting course.
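
    As a quick first check before diving into a debugger, the ".NET CLR Memory \ % Time in GC" performance counter tells you whether GC overhead is worth chasing. A minimal sketch for reading it (it assumes the counter instance name matches the process name; multiple w3wp.exe workers need instance disambiguation):

    ```csharp
    using System;
    using System.Diagnostics;

    class GcTimeCheck
    {
        static void Main()
        {
            // "% Time in GC" reports the percentage of elapsed time spent in GC
            // since the end of the last collection.
            string instance = Process.GetCurrentProcess().ProcessName;
            using (var gcTime = new PerformanceCounter(
                ".NET CLR Memory", "% Time in GC", instance, readOnly: true))
            {
                Console.WriteLine("% Time in GC: " + gcTime.NextValue().ToString("F1") + "%");
            }
        }
    }
    ```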

  7. I find the best way to combat the LINQ to SQL issue (and to increase security) is to create Stored Procedures in the DB and use LINQ to call them.
    No query compilation necessary that way… as far as I know.
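
    For anyone curious, this is roughly the shape of code the O/R designer generates when you drag an SP onto it (a sketch; ShopDataContext, Customer, and dbo.GetCustomersByCity are made-up names):

    ```csharp
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Reflection;

    // Pairs with the designer-generated ShopDataContext (which already derives
    // from DataContext); Customer is the usual mapped entity class.
    public partial class ShopDataContext
    {
        // Maps the method straight to the stored procedure: no LINQ-to-SQL query
        // translation happens, the provider just issues a parameterized EXEC.
        [Function(Name = "dbo.GetCustomersByCity")]
        public ISingleResult<Customer> GetCustomersByCity(
            [Parameter(DbType = "NVarChar(100)")] string city)
        {
            IExecuteResult result = this.ExecuteMethodCall(
                this, (MethodInfo)MethodInfo.GetCurrentMethod(), city);
            return (ISingleResult<Customer>)result.ReturnValue;
        }
    }

    // Usage:
    // using (var db = new ShopDataContext(connectionString))
    // {
    //     var customers = db.GetCustomersByCity("Seattle").ToList();
    // }
    ```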

  8. This article talks about how to fix the 3 silent performance killers for ASP.NET apps: handled exceptions, HttpWebRequest throwing a WebException for 404s, 401s, and other non-200 responses, and HttpResponse.Redirect(). All explained in an easy and very efficient manner.

  9. Ray Causey

    Mike, I like your suggestions to help our sluggish site and will look into them.
    However, regarding item 1.3… we use a large number of Redirect(newURL, False) calls in page loads to react to incorrect conditions (e.g. not logged in), but follow each one with a Response.End to kill the load. Otherwise, unless I am missing something, a simple Exit will not stop the rest of the page's code (HTML and vb.net), including controls, from being evaluated, processed, expanded, instantiated, etc. Unless the page load thread is stopped, subsequent code starts failing because the page load was exited prematurely and the objects are not set up properly. How else can I stop the page load after the redirect?
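
    For reference, here is a sketch of what we do today versus the CompleteRequest() alternative I understand the article to be suggesting (IsLoggedIn is a stand-in for our actual check):

    ```csharp
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsLoggedIn())   // hypothetical guard for this example
        {
            // What we do today: Response.End() aborts the thread, which is
            // exactly the ThreadAbortException cost the article describes.
            // Response.Redirect("~/Login.aspx", false);
            // Response.End();

            // Alternative: end the pipeline without aborting the thread.
            Response.Redirect("~/Login.aspx", false);
            Context.ApplicationInstance.CompleteRequest();

            // Caveat: CompleteRequest() skips the remaining pipeline events,
            // but the rest of THIS page's lifecycle (PreRender, control
            // processing, etc.) still runs, so downstream code has to tolerate
            // the early exit, e.g. by checking Response.IsRequestBeingRedirected
            // before touching objects that were never set up.
            return;
        }

        // ... normal page setup ...
    }
    ```
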
    Thanks

  10. Dave

    We bought commercial source code that was LINQ-heavy and horribly inefficient. Ditching LINQ and going direct (old school) with stored procedures (if you run MSSQL) got us a whopping 4-fold increase in performance. LINQ is good for small sites, however, and makes things easier with other DBs, but it is no match for real coding on SQL Server. Alternatively, I suppose you could use LINQ to call SPs (but why add the overhead?).

  11. Nice post.

    My interest is on the Entity Framework perspective.

    EF has proven to me that it makes code much less maintainable, and adds so much unnecessary coding just to get some basic, simple things done.

    For example, to do a single query, allow non-techs to see the raw SQL should any error occur in PROD, and have the query be efficient, really requires 3 extra steps:

    (1) Make a model.
    (2) Make a static compiled EF query to make it efficient.
    (3) Add extra logic to trap the raw SQL for troubleshooting.

    And in the latter case (3), EF can't actually build the raw SQL until after it connects to the DB, even in the compiled version, on that first "cold" query call.

    Also, in practice, with many devs using EF, I see them build their raw SQL in SSMS to ensure they have it correct. Then they throw the SQL away, go to EF, build models, and then build LINQ statements to do the same as the query. This is a waste of time and not maintainable, not to mention EF's slowness when running without a compiled query, which requires extra steps to make it efficient.
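
    For step (2) above, a precompiled query under the ObjectContext-era API (EF 4/5) looks roughly like this (a sketch; MyEntities and Order stand in for the EDMX-generated types):

    ```csharp
    using System;
    using System.Data.Objects;   // EF 4/5 ObjectContext-era API
    using System.Linq;

    static class OrderQueries
    {
        // Compile the LINQ-to-Entities query once so EF only pays the
        // expression-to-SQL translation cost on first use.
        public static readonly Func<MyEntities, int, IQueryable<Order>> ByCustomer =
            CompiledQuery.Compile((MyEntities ctx, int customerId) =>
                ctx.Orders.Where(o => o.CustomerId == customerId));
    }

    // For step (3), with ObjectContext the raw SQL can be pulled for
    // troubleshooting via ObjectQuery.ToTraceString() (which does need the
    // DB metadata/connection to be available):
    // string sql = ((ObjectQuery)ctx.Orders
    //     .Where(o => o.CustomerId == 42)).ToTraceString();
    ```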

    • Beefydog

      Totally correct on EF. What I've found is that, while some believe it to be more manageable for small-scale applications, it does not scale (not to mention it will ALWAYS be slower). EF troubleshooting can gobble up time like crazy. As an example (to get a contract), I converted an EF site to old school (let the database do the work), my preferred way, and it ran twice as fast without caching and 5X faster with caching, used significantly less CPU time, AND there's no "black box" to contend with. The problem, honestly, is with devs that refuse or are afraid to learn SQL Server. It's not that hard to write or debug! EF has its place, but it is only good for simple stuff. Advanced stuff requires real knowledge of DBMSes. There's a LOT more than just CRUD.

  12. Thanks for this information. It's been about 14 years since I needed (or wanted) to know any of this backend stuff, but right now I'm building a website for a cookbook I just finished writing and I am trying to test it locally. So I needed to set up IIS 8.x, and your info was very clear and concise (although I'm still going 'huh?!' :), but without quite the panic/stress I had two hours ago).

  13. I don't know what to say. I'm speechless; what a PERFECT TOOL this LeanSentry of yours is.

    I checked out this post because I'm experiencing problems on my server, and with the help of the tool I even found the method that is causing the problems.

    P.S. Sorry for the bad English.

  14. Dennis Jakobsen

    I should have probably read the date on this article before "fixing" my application. Now that I have reduced my application's ThreadAbortExceptions from an average of 700/sec under load to 0, I find that it's actually slightly slower now. But hey, I'm doing the right thing 🙂 Maybe MS fixed this in a recent version of .NET, so these ThreadAbortExceptions aren't as expensive?

    • Hey Dennis,

      The overhead of exceptions has to be viewed relative to the overhead of the rest of your codepath. It’s highly likely that your application has way worse problems than TAE, or perhaps that your fix introduced additional overhead that was worse.

      However, for many high-traffic applications that are already fairly efficient, doing this will have a noticeable improvement.

      As always, profile first to identify your biggest issues, and fix those before tackling other stuff.

      Best,
      Mike

  15. Michael

    Hi,

    I'm using AppDynamics to monitor the CLR garbage collector's time spent in GC, and I get a lot of notifications about this because the GC time spent is more than 50%. But it's only for a short time (5 minutes). I compared the metrics between the CLR and the CPU used, and nothing correlates. Is it critical?

    Thank you

    • Hey Michael,

      Thanks for checking in. Time in GC of 50% is pretty bad, and will have a noticeable performance impact on your entire application. You should shoot for time in GC of 10% or less; ideally I prefer under 2-5%.

      Have you analyzed your memory usage and identified the negative patterns that are causing such a high GC overhead in production?

      Just FYI, you can use LeanSentry to do this fairly quickly.

      Best,
      Mike

  16. Thank you for providing such information about .NET and IIS. I am working as a .NET developer in a reputed organization, but because I did not know about new updates my promotion was stuck. Now, after reading your post, I can prove myself and get a good salary hike at my office.

  17. If time in GC of 50% is not good, what about 35%? I'm asking because mine is always between 35 and 40%. Thanks for sharing this awesome article.
