Downloading files with ASP.NET using Save As Dialog

Thursday, March 29, 2007

This article explains how to download a file that is stored in a database, forcing the browser to open a “Save As” dialog box so that the file can be saved on the client system.

The content of the file is stored in a column of image data type in a table.

Such a dialog box for saving a file can be displayed by using the HttpContext.Current.Response property.

HttpContext is a class that encapsulates all HTTP-specific information about an individual HTTP request. The property HttpContext.Current gets the HttpContext object for the current HTTP request.

Here is an example:

1. HttpContext.Current.Response.Clear();
2. HttpContext.Current.Response.ContentType = "application/octet-stream";
3. HttpContext.Current.Response.AddHeader("content-disposition", "attachment; filename=" + fileName);
4. // Remove the charset from the Content-Type header.
5. HttpContext.Current.Response.Charset = "";
6. byte[] buffer = (byte[])(dsFile.Tables[0].Rows[0]["FILE_CONTENT"]);
7. HttpContext.Current.Response.BinaryWrite(buffer);
8. // End the response.
9. HttpContext.Current.Response.End();


Line 1 clears all content output from the buffer stream.

Line 2 sets the HTTP MIME type for the output stream. The default value is "text/html". Since we want the browser to treat the response as a file to be saved on the client system rather than a page to render, we use the generic binary type "application/octet-stream".
The AddHeader() method in Line 3 adds an HTTP header to the output stream. It accepts two parameters: the name of the HTTP header and its value. In our case the header name is "content-disposition" and the value is "attachment" followed by the file name. This is what causes the save file dialog to appear. If the content-disposition value is "inline" instead, the file is opened in its associated application rather than prompting a save dialog.

In Line 7 we actually write the contents of the file, as binary data, to the HTTP output stream using the BinaryWrite() method. This method takes a byte array containing the bytes to be written. Here the contents of the buffer are fetched from the database into the dataset dsFile, with the column "FILE_CONTENT" holding the bytes to be written.

The End() method in Line 9 sends all currently buffered output to the client, stops execution of the page, and raises the Application_EndRequest event. At this point the browser shows a message asking whether to open or save the file, along with information such as the file type and file name.

On clicking Save, the save file dialog opens, where we can save the file on our system either with the name provided in Line 3 or with a new name.
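For completeness, here is a minimal end-to-end sketch of such a download handler. It assumes a hypothetical table DOCUMENTS with columns FILE_ID, FILE_NAME and FILE_CONTENT (image data type) and a connection string variable; adjust the query and names to your own schema.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Web;

public static class FileDownloader
{
    // Streams a file stored in the database to the client as an attachment.
    public static void DownloadFile(int fileId, string connectionString)
    {
        // Fetch the file name and contents into a dataset (hypothetical table and columns).
        DataSet dsFile = new DataSet();
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlDataAdapter adapter = new SqlDataAdapter(
                   "SELECT FILE_NAME, FILE_CONTENT FROM DOCUMENTS WHERE FILE_ID = @FileId", conn))
        {
            adapter.SelectCommand.Parameters.AddWithValue("@FileId", fileId);
            adapter.Fill(dsFile);
        }

        DataRow row = dsFile.Tables[0].Rows[0];
        string fileName = (string)row["FILE_NAME"];
        byte[] buffer = (byte[])row["FILE_CONTENT"];

        // The same steps as Lines 1-9 above.
        HttpResponse response = HttpContext.Current.Response;
        response.Clear();
        response.ContentType = "application/octet-stream";
        response.AddHeader("content-disposition", "attachment; filename=" + fileName);
        response.Charset = "";
        response.BinaryWrite(buffer);
        response.End();
    }
}

Calling FileDownloader.DownloadFile(id, connectionString) from, say, a button click handler will then prompt the user with the Save As dialog described above.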

How To Create a Proxy Site

So you want to create a proxy website for yourself? Proxy websites are easy to build and can be up and running within a couple of hours. They are a great source of revenue through advertisements and referrals, and proxies can also be used to access sites that are blocked on your office, school, or college network.

So what does it take to create a proxy website of your own and make good revenue out of it? Dave Turnbull has created a short, simple walkthrough that takes you through the basics of how to create a proxy site:

Making a proxy is easy. Upload some files, change some graphics, and slap up some ads, and you’re done. But making a successful proxy is a whole different ball game, and what this series of articles aims to help you with. I’ve made a few proxies myself, and for a very small amount of work, I’ve made some decent revenue.

If you want to read the entire article, click here.

Output Parameters in OLE DB Command in SSIS

Monday, March 26, 2007

Today, when I was nearly at the verge of finishing my SSIS package, I found out that I really could not complete it at the moment. I have a lot of validations on the input data and a lot of lookups to get the reference values, and after that I need to insert the rows into a SQL Server database. It is not a simple insertion; there is a fair amount of logic behind it: checking whether rows containing individual numbers form a sequence so that they can be inserted as a range rather than as individual entries. Not only that, I need to check whether a number already exists in the table within some range; if it does, I need to discard it and redirect that row to an error output file.

So ultimately I had to use an OLE DB Command component and write SQL in it to call the stored procedure that handles the insert logic. I was able to map the input columns of the stored procedure to my source input columns without any difficulty. The problem was how to know which rows already existed and how to redirect those rows to the error output with a proper custom message. I solved it by trial and error: I added two output parameters to my stored procedure, added two columns to my input, and mapped them to the OUT parameters in the OLE DB Command. Then the task was simple: place a Conditional Split after the OLE DB Command, check whether the indicator (one of the OUT parameters) is TRUE, meaning the insertion failed, and copy those rows to an error file that I maintain, along with the other OUT parameter, @Message.

If you want to log general SQL errors that may occur during the execution of the OLE DB command component you can simply configure its error outputs and log the ErrorMessage.

However, I am still not able to figure out how to map a stored procedure's return value through an OLE DB Command.

How to prevent Cross Site Scripting Attacks

Saturday, March 24, 2007

One of the common types of security attacks on web-based systems (both intranet and Internet) is cross-site scripting. It is a technique that allows an attacker to do one or more of the following.
  1. Execute malicious script in a client’s web browser.
  2. Insert script, object, applet, form and embed tags.
  3. Steal web session information and authentication cookies.
  4. Access the client computer.

Scenario - Any web page that allows users to enter data in fields is susceptible.

How to defend against cross-site scripting attacks?
  1. Validate user input. Do not trust any input as valid unless proven otherwise.
  2. Do not echo back data entered by a user unless you have validated it.
  3. Do not store secret information in cookies. Secret information includes any data item that uniquely identifies a person, a credit card number, etc. If you must store secret information in a session cookie, encrypt the cookie.
  4. Use HttpOnly cookie option.
  5. Use the <frame> security attribute to restrict what framed content can do.
  6. Take advantage of ASP.NET features, such as ValidateRequest Page attribute.
  7. Use HtmlEncode and UrlEncode where appropriate.
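As a small illustration of points 4 and 7 above, the helper below sketches how an ASP.NET application might issue an HttpOnly cookie and encode untrusted input before echoing it back; the class, method, and cookie names are made up for the example.

using System;
using System.Web;

public static class XssHelpers
{
    // Issues a session-related cookie that client-side script cannot read.
    public static void AddHttpOnlyCookie(HttpResponse response, string name, string value)
    {
        HttpCookie cookie = new HttpCookie(name, value);
        cookie.HttpOnly = true;   // not visible to document.cookie
        cookie.Secure = true;     // sent only over HTTPS
        response.Cookies.Add(cookie);
    }

    // Encode untrusted text before writing it into HTML output.
    public static string SafeHtml(string userInput)
    {
        return HttpUtility.HtmlEncode(userInput);
    }

    // Encode untrusted text before placing it in a URL or query string.
    public static string SafeUrl(string userInput)
    {
        return HttpUtility.UrlEncode(userInput);
    }
}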

Using Temp Tables in SSIS Package Development

Friday, March 23, 2007

Often while working on an SSIS package you will need to temporarily hold your data in a staging table in one Data Flow Task, and then in another task fetch the data from the staging table, perform transformations, load it, and finally delete the staging table.

That usually means creating a physical table in your production database to stage the data. In a production environment, however, you may not want to create and destroy objects in the production database and might prefer to use temp tables instead. This seems easy, and in fact it is, but it requires a small trick: modifying the default properties of the components. Let us see what to do.



In the figure you have two Execute SQL tasks. The Create Temp Table task executes a SQL command to create a temporary table named #tmpMyData. The Drop Temp Table task executes a SQL command to drop table #tmpMyData.

If you execute this package, you will notice that the drop portion of the package fails. The package progress tab reports an error message saying the table does not exist. This is because the two Execute SQL tasks do not share the same connection; they only share the same connection manager. Each task builds its own connection from the connection manager, so when the first task finishes, its connection is closed and the temp table is destroyed, and the second task then opens a brand-new connection.

To fix this, open the properties window of the OLE DB connection manager and find the RetainSameConnection property, which is set to "False" by default. Changing it to "True" is our trick and solves the problem.



By changing this property to "TRUE," both Execute SQL tasks will share the same connection and both will be able to use the temp table.

You can also use this trick to improve performance in SSIS packages when a task that requires a connection runs inside a loop; otherwise, imagine how many connection openings and closings would occur during that loop.

Three Dimensions to Protect your Computer

Thursday, March 22, 2007

First - Strengthen the defense of your computer

- Install Firewalls
"Firewall" is an isolation technology to separate the internal network and the Internet. The firewall carries out some filtering when two networks communicate. It lets the data/person that you "agree" to enter your network, and also block the data/person you "do not agree" from your network. It can prevent they changes, copy, or destroys your material. To ensure the firewall get into work, you must keep it update.

- Install Anti-virus software
The key to dealing with computer viruses is not to "kill" them but to "prevent" them. You should install anti-virus software, start its real-time monitoring process, and keep both the software and the virus definition file updated. To guard against the newest viruses, set the update process to run daily. Also, scan the computer completely for viruses every week.

- Guard against Spyware
Spyware is a program that is installed without the user's authorization. It can gather information and send it to a third party. Spyware can be attached to software or executable images and break into the user's computer. It is used to track computer usage, record keystrokes, or take screen captures. To get rid of spyware, you can
- raise the security level of your browser
- install software that guards against spyware
- verify the software you plan to install with its official website

Second - Guard against attacks

- Refuse unknown software, emails and attachments
Don't download unknown software. Save all downloaded software into a single directory and scan it before installing. Don't open unknown emails or their attachments; many viruses spread through email, especially messages with interesting headlines.

- Don't visit hacker or pornographic websites
Many viruses and spyware come from these websites. If you browse such a website and your computer is not secure enough, you can imagine what will happen next.

- Avoid shared folders
Shared folders are risky because outsiders can browse through them freely. When you need to share a folder, remember to set a password, and remove the sharing as soon as it is no longer needed. It is extremely dangerous to share a whole drive; if someone removes a system file, your machine may go down and fail to start up again.


Last - Keep checking and updating

- Set different and complicated passwords
On the Internet there are countless places that need a password: e-banking, login accounts, email. Try to use a different password for each one; this limits the loss if one of the passwords is cracked. Avoid meaningful passwords such as birthdays or telephone numbers; use passwords that mix letters and numbers. One more thing: do not choose the "Save Password" option.

- Beware of fraud
The number of fraud cases on the Internet keeps increasing. A typical scheme is to build a fake bank website and send out emails asking for passwords. Before taking any action, verify whether the request is real: phone the bank hotline or go to the bank and ask directly.

- Backup
Backup is the last line of defense against attacks. If your computer is hacked, the operating system and software can be reinstalled, but your data can only be restored if you make backups frequently.

Canonicalization : Security Attack

One of the common types of security attacks is due to canonicalization. A canonicalization error is an application vulnerability that occurs when an application parses a filename before the operating system has canonicalized it. Operating systems canonicalize filenames when processing a file, deriving the absolute, physical path of the file from a virtual or relative path.
Files can be accessed using multiple names. An example is given below. If your application uses one of these names to validate whether the user has access to the file, an attacker could potentially use one of the other synonymous names.

DoNotTouch.txt
DoNotT~1.txt
DoNotTouch.txt.
DoNotTouch.txt::$DATA


How to minimize canonicalization errors:
  1. Validate user input to ensure that the entered file name is not a restricted file. Use regular expressions to look for specific file names embedded within the user input string.
  2. Canonicalize the file name before validation. This is the process of deriving the simplest, absolute form of the file name. This is the more secure option, because the .NET Framework provides the application with the absolute name of the file. To do this, you can use System.IO.Path.GetFullPath.
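As a rough sketch of the second approach, the check below canonicalizes both the allowed base directory and the requested file name with Path.GetFullPath before comparing them; the class, method, and parameter names are only illustrative.

using System;
using System.IO;

public static class FileAccessValidator
{
    // Returns true only if the requested file resolves, after canonicalization,
    // to a location inside the allowed base directory.
    public static bool IsAllowed(string baseDirectory, string requestedFile)
    {
        // Resolve both paths to their absolute forms before comparing them,
        // so relative segments such as "..\" cannot escape the base directory.
        string fullBase = Path.GetFullPath(baseDirectory)
            .TrimEnd(Path.DirectorySeparatorChar) + Path.DirectorySeparatorChar;
        string fullRequested = Path.GetFullPath(Path.Combine(baseDirectory, requestedFile));

        return fullRequested.StartsWith(fullBase, StringComparison.OrdinalIgnoreCase);
    }
}

A check like this addresses path traversal, but not every alias in the list above, so combining it with the input validation from point 1 is still advisable.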

New Features in Visual Basic 9.0

Wednesday, March 21, 2007

Several new features have been added to the language. A few of them are listed below:

  1. Implicitly Typed Local Variables

This feature allows us to declare variables without specifying their data type. The data type is inferred from the value assigned to the variable on the right-hand side.

  2. Object and Array Initializers

The new object initializers in VB 9.0 are an expression-based form of “With” for creating complex object instances concisely. We already know that the With statement simplifies access to multiple members of an object: a member-access expression starting with a period is evaluated as if it were preceded by the object name itself, as in

Dim MyCountry As New Country()
With MyCountry
    .Name = "My Country"
    .Area = 555
    .Population = 15432
End With

Using the new object initializers, the above statements can be combined as

Dim MyCountry = New Country With { .Name = "My Country", _
                                   .Area = 555, _
                                   .Population = 15432 }

  3. Anonymous Types

VB 9.0 enables us to have variables without declaring or defining a named type. All you need to do is create something that looks like the type you want and access its public fields/properties.

Some more information about anonymous types: here.

  4. Deep XML Support

LINQ to XML is a new, in-memory XML programming API designed specifically to leverage the latest .NET Framework capabilities such as the Language-Integrated Query framework. Just as query comprehensions add familiar, convenient syntax over the underlying standard .NET Framework query operators, Visual Basic 9.0 provides deep support for LINQ to XML through XML literals and XML properties.

For a detailed explanation, check the MSDN link at the bottom of this post.

  5. Query Comprehensions

SQL-like queries can now be used, with operators such as Select, Order By, and Where, to get the desired data or results from collections. For this purpose a query expression is used, which is broadly similar to SQL syntax; because of some clashes with VB syntax, a few differences exist that should be learned.

  6. Extension Methods and Lambda Expressions

Much of the underlying power of the .NET Framework standard query infrastructure comes from extension methods and lambda expressions.

Extension methods are shared methods marked with custom attributes that allow them to be invoked with instance-method syntax. Most extension methods have similar signatures: the first argument is the instance against which the method is applied, and the second argument is typically the predicate or function to apply.

Some more details about lambda expressions can be gathered from here.

  7. Nullable Types

Nullable values from relational databases used to be inconsistent with the value types in .NET. Now we can declare value types as nullable to overcome this inconsistency.

  8. Relaxed Delegates

In Visual Basic 9.0, binding to delegates is relaxed to be consistent with method invocation. That is, if it is possible to invoke a function or subroutine with actual arguments that exactly match the formal-parameter and return types of a delegate, we can bind that function or subroutine to the delegate. In other words, delegate binding and definition will follow the same overload-resolution logic that method invocation follows.

To have a detailed look at each of the features mentioned above, read more.

Sending Mails in .NET framework 2.0 : new namespace System.Net.Mail

Tuesday, March 13, 2007

If you have used the System.Web.Mail namespace in .NET 1.x for sending emails programmatically, expect a surprise. All classes within this namespace have been deprecated in favor of the new System.Net.Mail namespace. System.Net.Mail contains classes such as MailMessage, Attachment, MailAddress, and SmtpClient to help us send emails in the 2.0 world. The features provided by this namespace are, in a nutshell, given below.

  • MailMessage is the main class that represents an email message.
  • Use MailAddress class to represent the sender and each recipient.
  • Use SmtpClient class to connect to the SMTP server and send the email, both synchronously and asynchronously.
  • Use the AlternateView class to create the email content in alternate formats, say one in HTML and another in plain text, to support different recipient types.
  • Use LinkedResource class to associate an image with the email content.
  • SmtpPermission class and SmtpPermissionAttribute can be used for code access security.
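Putting a few of these classes together, here is a minimal sketch of sending a message; the SMTP host, credentials, file path, and addresses are placeholders you would replace with your own.

using System.Net;
using System.Net.Mail;

public static class MailSender
{
    public static void SendWelcomeMail(string from, string to)
    {
        // Build the message with an HTML body and a plain-text alternate view.
        MailMessage message = new MailMessage(new MailAddress(from), new MailAddress(to));
        message.Subject = "Welcome";
        message.Body = "<p>Hello from System.Net.Mail!</p>";
        message.IsBodyHtml = true;
        message.AlternateViews.Add(
            AlternateView.CreateAlternateViewFromString("Hello from System.Net.Mail!", null, "text/plain"));

        // Attach a file (placeholder path).
        message.Attachments.Add(new Attachment(@"C:\temp\welcome.pdf"));

        // Connect to the SMTP server and send; SendAsync is available for non-blocking sends.
        SmtpClient client = new SmtpClient("smtp.example.com", 25);
        client.Credentials = new NetworkCredential("user", "password");
        client.Send(message);
    }
}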

To know more about some of these classes, read this article.

Code Analysis in Visual Studio 2005 Team Suite

Friday, March 9, 2007

If you are using VS.NET 2005 Team Suite, code analysis is built into the IDE itself. In older versions of VS.NET, you might have used FxCop externally to compare your code against pre-defined rules, or might have integrated it with the IDE by adding FxCop as an add-in.

To enable code analysis, open the project properties, navigate to the Code Analysis tab, select “Enable Code Analysis”, and choose the rules or rule categories that you want to run. When you do so, code that does not conform to these rules is reported during the build as warnings. Based on project needs, you can configure some of them to be reported as errors instead of warnings. This level of granular control helps enforce strict conformance to the rules; for example, any violation of a design rule can be considered a bad practice and hence configured to produce an error rather than just a warning.

Microsoft ends JPEG ...Going to HD Format

March 08, 2007 (IDG News Service) -- Microsoft Corp. will soon submit to an international standards organization a new photo format that offers higher-quality images with better compression, the company said today.

The format, HD Photo -- recently renamed from Windows Media Photo -- is taking aim at the JPEG format, a 15-year-old technology widely used in digital cameras and image applications.

Both formats take images and use compression to make the file sizes smaller so more photos can fit on a memory card. During compression, however, the quality of the photo tends to degrade.

Microsoft said HD Photo's lightweight algorithm causes less damage to photos during compression, with higher-quality images that are half the size of a JPEG.

Read More

Using View State in Server controls

Wednesday, March 7, 2007



View state is serialized and deserialized on the server. To reduce CPU cycles, reduce the amount of view state your application uses, and disable view state entirely if you don’t need it, for example when you are doing at least one of the following:
· Displaying a read-only page where there is no user input.
· Displaying a page that does not post back to the server.
· Rebuilding server controls on each post back without checking the postback data.


As the view state grows larger, it affects performance in the following ways.
· Increased CPU cycles are needed to serialize and deserialize the view state content.
· Pages take longer to download because they are larger.
· Very large view state can impact the efficiency of garbage collection.
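As a quick illustration, view state can be disabled either for an entire page or per control; the page and control names below are hypothetical.

using System;
using System.Web.UI;

public partial class ReportPage : Page
{
    protected void Page_Init(object sender, EventArgs e)
    {
        // A read-only page that never posts back does not need view state at all.
        this.EnableViewState = false;

        // Alternatively, disable it only for heavy controls that are rebuilt on every request:
        // gridResults.EnableViewState = false;
    }
}

The same effect can be achieved declaratively by setting EnableViewState="false" on the @ Page directive or on an individual control tag.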

Alpha Geek: Copy TV shows to your iPod

Tuesday, March 6, 2007

So Apple wants you to pony up $1.99 per episode of Heroes when you're already paying the cable company for it? Nuh-uh. Don't think so. Seems like you should be able to copy that show--or any other--from your media center PC or TiVo right to your iPod. You can, and it's easier than you might think. (Easier, even, than copying DVDs.)



read more | digg story

Common Table Expressions in SQL Server 2005 (CTE)

Common Table Expressions, CTE for short, are a new feature in SQL Server 2005. A CTE is a temporary result set defined as part of a SELECT, INSERT, UPDATE, DELETE, or CREATE VIEW statement. A very simple usage of a CTE is given below.

WITH MyCTE (ListPrice, SellPrice) AS
(
    SELECT ListPrice, ListPrice * .95 FROM Production.Product
)
SELECT * FROM MyCTE WHERE SellPrice > 100


A CTE definition requires three things, viz, a name for the CTE (MyCTE in the above example), an optional list of columns (ListPrice and SellPrice) and the query following the AS keyword.


Using a CTE can improve readability in complex queries involving several tables. It is also a good replacement in cases where you would otherwise use a temporary table just once after creating it. The advantages of using a CTE are given below.

  • Create a recursive query.
  • Substitute for a view when the general use of a view is not required; that is, you do not have to store the definition in metadata.
  • Enable grouping by a column that is derived from a scalar subselect, or a function that is either not deterministic or has external access.
  • Reference the resulting table multiple times in the same statement.

To learn more about the capabilities and limitations of CTEs, visit the MSDN site.

Partial Classes in .NET framework 2.0

.NET Framework 2.0 introduces the concept of partial classes. Partial classes allow you to split a class definition across multiple source files. Separating the class definition lets multiple programmers work on the same class simultaneously and makes it easier to organize code within a class. VS.NET 2005 uses this concept to hide designer-generated code when you create Windows Forms.

To create a partial class, add the “partial” keyword to the class definition. To learn more about partial classes, read the MSDN library article mentioned below or visit this link. Though the links point to C#, partial classes are available in Visual Basic also.
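A minimal sketch of the idea, with made-up file and class names: the two fragments below live in separate files, and the compiler merges them into a single Customer type.

// Customer.Part1.cs - one part of the class definition.
public partial class Customer
{
    private string name;

    public string Name
    {
        get { return name; }
        set { name = value; }
    }
}

// Customer.Part2.cs - another part; members declared in either file are visible in both.
public partial class Customer
{
    public string Describe()
    {
        return "Customer: " + name;
    }
}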

Isolated Storage in .NET framework

Monday, March 5, 2007

Isolated storage is a private file system managed by the .NET Framework. As with the standard file system, you can use familiar techniques (such as StreamReader and StreamWriter) to read and write files. However, isolated storage requires fewer privileges from your code, making it useful for implementing least privilege. Additionally, isolated storage is private and is isolated by user, domain, and assembly.

When to use isolated storage:

Isolated storage is not always the best solution for storing persistent data. It should not be used to store configuration and deployment settings, which administrators control. It is, however, a good place to store user preferences; because administrators do not control them, they are not considered configuration settings.

If your data needs strong protection, you can still use isolated storage, but do not rely on it for security: encrypt the data before writing it to isolated storage. Isolated storage should not be used to store high-value secrets, such as unencrypted keys or passwords, because it is not protected from highly trusted code, unmanaged code, or trusted users of the computer.

You can look up the System.IO.IsolatedStorage namespace to learn more about how you can leverage the feature in your .NET applications.
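Here is a small sketch of reading and writing a per-user file in isolated storage with the familiar stream classes; the file name prefs.txt and the class name are invented for the example.

using System.IO;
using System.IO.IsolatedStorage;

public static class UserPreferences
{
    // Saves a preference string into per-user, per-assembly isolated storage.
    public static void Save(string text)
    {
        using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForAssembly())
        using (IsolatedStorageFileStream stream =
                   new IsolatedStorageFileStream("prefs.txt", FileMode.Create, store))
        using (StreamWriter writer = new StreamWriter(stream))
        {
            writer.Write(text);
        }
    }

    // Reads the preference back; returns null if it has never been saved.
    public static string Load()
    {
        using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForAssembly())
        {
            if (store.GetFileNames("prefs.txt").Length == 0)
                return null;

            using (IsolatedStorageFileStream stream =
                       new IsolatedStorageFileStream("prefs.txt", FileMode.Open, store))
            using (StreamReader reader = new StreamReader(stream))
            {
                return reader.ReadToEnd();
            }
        }
    }
}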

Keywords Planning: Plan your content with keywords

Sunday, March 4, 2007

While optimizing your website with good keywords is an important part of your search engine strategy, I do think that too many webmasters spend way too much time tweaking it to death. I don’t think that is a good idea, nor do I think it is beneficial to their websites. All that time spent on one thing, while neglecting the rest of their marketing strategies, hurts their online business in the long run. So much time wasted getting those keywords just right actually hurts the quality of the content on their web pages.

I know that most of us are taught to find a main keyword and build the website around that keyword. I know that’s what I did in the beginning. I think that’s a big mistake, because it takes away from the quality of the content on the website: too much focus is put on the keyword, trying to fit it in to get the web page optimized for the search engines. I’ve seen so many websites where you can tell the site was built around specific keywords, because so much of the content is really hard to understand and doesn’t make much sense at all. You can actually pick out the keywords because they are used so many times. While you need to use your keywords throughout your content, you don’t need to overdo them. Using your keywords too often will actually hurt you with the search engines more than it will help you.

There’s a better way to optimize your website without hurting your content. I have found that the best way to optimize a web page is to use just one keyword that is super targeted to the content on that page. Find the best keyword you possibly can and put it aside for the moment. Using a good word processor, go and write the content for your web pages. Forget about the keyword, or about writing any HTML tags, until you have finished writing the content. When you have finished, read through it to make sure it makes sense. Also check whether your content already contains a keyword, written unintentionally, that may be better than the one you originally chose. I have found many great keywords by going through this process; it is worth checking for hidden gems sitting there in the already written content.

If there is not a better keyword within your content, you can now go back and start inserting your keyword into the body of your web page. The main objective is that your keyword blends in with the content in a way that makes sense. You will probably have to make some changes so that it does blend in and does make sense. Of course you will need to put your keyword in the normal places: it should appear in the title, the description meta tag, the keyword meta tag, the heading, and the body. The body is where the keyword is most abused by webmasters. While the keyword needs to appear throughout the body, it doesn’t need to be there hundreds of times; using your keyword two to three percent of the time within the body is more than sufficient. Repeat the above process for all of your web pages, and you will end up with a well optimized website that makes perfect sense. The last, and probably the most important, thing to do when you are finished is to forget about doing any more optimizing. It takes time to see whether your chosen keyword will be of benefit; if you use a super targeted keyword, it will be. Forget about it and go focus on the things you need to get done with your online business that you may have been neglecting.


About The Author
Brian Queenan is the owner of http://the-truth-about-internet-marketing.com. Learn what it really takes to market online.
