Hi IT guys,
A few days back I was working on loading dynamic controls in ASP.NET and ran into a problem: whenever I loaded a control dynamically, its events would not fire on postback. The solution is to persist the user control's path and ID (in ViewState here) and re-create the control with the same ID on every postback, so that its event handlers stay attached.
See the code below.
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        string ctrlName = "Controls/ctrlLogin.ascx";
        string ctrlID = "uc1";

        Control uc = LoadControl(ctrlName);
        uc.ID = ctrlID;
        divContent.Controls.Add(uc);

        // Remember which control was loaded, and under what ID,
        // so it can be re-created identically on postback.
        ViewState["ctrlName"] = ctrlName;
        ViewState["ctrlId"] = ctrlID;
    }
    else
    {
        string ctrlName = (string)ViewState["ctrlName"];
        if (ctrlName != null)
        {
            // Re-create the control with the same ID; this keeps
            // its event handlers wired up across postbacks.
            Control uc = LoadControl(ctrlName);
            uc.ID = ViewState["ctrlId"].ToString();
            divContent.Controls.Add(uc);
        }
    }
}
Any suggestions or ideas will be appreciated. Thank you.
Regards,
IT Guy
Tuesday, June 9, 2009
Wednesday, June 3, 2009
Div Based Design
IT guys, these days everyone is switching over to the recommended web design strategy of using DIV tags instead of tables. CSS has many advantages over traditional tables for more complex layouts, but in this tutorial we'll cover the basics of DIV-based web design. We'll simply lay out a common web design layout and let you tinker with it from there.
To start with, I'll display all the CSS code we're going to use to layout the page, followed by the HTML that will actually display it.
The CSS Code
body {
margin: 0px;
padding: 0px;
}
#header {
background: #438a48;
width: 100%;
}
#leftcolumn {
background: #2675a8;
float: left;
width: 25%;
height: 700px;
}
#content {
background: #000;
float: left;
width: 75%;
height: 700px;
}
#footer {
background: #df781c;
clear: both;
width: 100%;
}
The Tutorial
Let me explain the rules involved. First, we set the body of the document to have zero margin and zero padding on all four sides. This allows the content to sit flush against the edges of the browser window.

Next, we create the header region, setting its width to 100% of the browser's width and giving it a background of the hex color #438a48. We do not give the header a static height; we let the content within determine that.

Next, we set up the left column. We give it a background color, a width of 25% of the browser's width, and a static height of 700 pixels. We also "float" the div to the left. Floating allows more than one DIV to sit on the same line, which is going to come in handy for the content section.

Now look at the content section. Everything should look familiar. We float it to the left as well, pushing it flush against the left column. The width percentages of the left column and the content section should add up to 100%.

Our last section is the footer, with the same kinds of rules as the rest, plus the "clear" property. Clear has four values: left, right, both, and none. Clearing a DIV tells the browser that no floated element may sit on the specified side(s) of that div. This is a very useful property; I encourage you to experiment with it as specified on the W3C page.
After you have set up the CSS styles for your DIVs, the last step is to create the DIVs in your HTML with their respective IDs. Your browser will do the rest!
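The corresponding HTML might look like the following minimal sketch (the stylesheet filename is an assumption; the div ids match the CSS above):

```html
<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" type="text/css" href="layout.css" />
</head>
<body>
  <div id="header">Header content</div>
  <div id="leftcolumn">Navigation</div>
  <div id="content">Main content</div>
  <div id="footer">Footer content</div>
</body>
</html>
```

Because leftcolumn and content are both floated, they sit side by side; the footer's clear: both drops it below whichever column is taller.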
Thursday, May 28, 2009
How to make money
It's a very informative link about earning money online:
http://trafficcoleman.com/blog/general-stuff/how-to-make-make-money-online/
Internet Marketing Strategies
It's a very informative marketing site. Do visit it:
http://trafficcoleman.com/ams/blogroll.html
Thursday, May 21, 2009
Garbage Collector Internal Mechanism
In .NET programming, developers don't often do garbage collection on their own, because they rely on the garbage collection features .NET provides. A common explicit use is when a programmer invokes the GC.Collect() method. The presence of the garbage collector frees the programmer from any worries about dangling data. Let us start with the cause that led to garbage collection in .NET. We know necessity is the mother of invention, so what was the necessity that led to garbage collection?
For Detail Visit: Garbage Collector Internal Mechanism in .NET
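Since the post mentions GC.Collect(), here is a minimal C# sketch of an explicit collection (illustrative only; forcing collections is rarely a good idea in production code):

```csharp
// Measure heap size, drop a reference, then force a collection.
long before = GC.GetTotalMemory(false);   // bytes currently allocated

byte[] temp = new byte[1024 * 1024];      // allocate some garbage
temp = null;                              // drop the only reference

GC.Collect();                             // ask the GC to run now
GC.WaitForPendingFinalizers();            // let finalizers complete

long after = GC.GetTotalMemory(true);     // true = collect before measuring
```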
Sunday, May 17, 2009
Fiddler
What is Fiddler?
Fiddler is a Web Debugging Proxy which logs all HTTP(S) traffic between your computer and the Internet.
For Detail: Fiddler Detail
Gmail
Gmail is the best web application I have ever seen. Simple implementation, super Ajax, cute chat, status messages, fast mail checking, live updating... its features are more endless than my WordPress database can withstand.
When you type www.gmail.com, the following actions happen. It is very interesting.
For Details View: Gmail Architecture
Monday, March 23, 2009
flv player
Flv Player
function createPlayer(theFile, arg)
{
    var s = new SWFObject("Controls/mediaplayer.swf", "thePlayerId", "350", "255", "7");
    s.addParam("allowfullscreen", "true");
    s.addVariable("file", theFile);
    s.addVariable("width", "350");
    s.addVariable("height", "255");
    s.addVariable("wmode", "opaque");
    s.addVariable("autoscroll", "true");
    s.addVariable("allowscriptaccess", "true");
    s.addVariable("bufferlength", "10");
    s.addVariable("displayheight", "230");
    s.addVariable("allowfullscreen", "true");
    s.addVariable("autostart", "true");
    s.addVariable("shuffle", "false");
    s.addVariable("enablejs", "true");
    s.addVariable("javascriptid", "thePlayerId");
    s.write("divmsndbc");
}
For Details View: flv video Playing videos with Flv Player
Tuesday, March 10, 2009
Load user data once with an HttpModule
A couple of years ago, when I was less focused, finished with my book and completely unmotivated to develop anything useful for my own non-day-job projects, I struggled trying to shoehorn my apps into the ASP.NET Membership and Profile APIs. Probably because of my lack of experience, I became very frustrated at the point where the objects started to relate to the data.
It occurred to me fairly recently that these systems provide a level of abstraction that makes sense in terms of data storage, but you still have to do a fair amount of work in your providers to make sure you aren't pounding the data store. That means caching, of course, but you still might be creating objects in several places (handlers, the page, user controls, etc.), and each time you're going to the well, whether it be by way of the caching mechanism you create or going to the data store.
So why cache at all? Why not get the data once, early in the request cycle, and go back to it when you need it? Oddly enough, it was good old-fashioned FormsAuth that made me think of this: in ASP.NET v1.x we would look up the user and roles, probably in an HttpModule, and access that data from wherever.
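As a rough sketch of that idea (type and member names here are illustrative, not from the article), an HttpModule can load the user's data once per request and stash it where pages, controls, and handlers can all reach it:

```csharp
public class UserDataModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.AuthenticateRequest += OnAuthenticateRequest;
    }

    private void OnAuthenticateRequest(object sender, EventArgs e)
    {
        HttpContext ctx = ((HttpApplication)sender).Context;
        if (ctx.User != null && ctx.User.Identity.IsAuthenticated)
        {
            // Load once per request; HttpContext.Items lives exactly
            // as long as this request, so nothing goes stale.
            ctx.Items["UserProfile"] = LoadProfile(ctx.User.Identity.Name);
        }
    }

    // Placeholder for whatever data access you actually use.
    private object LoadProfile(string userName) { return null; }

    public void Dispose() { }
}
```

Anything downstream can then read HttpContext.Current.Items["UserProfile"] instead of going back to the data store.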
For Details View : http://www.uberasp.net/getarticle.aspx?id=51
Thursday, March 5, 2009
Difference between UNION and UNION ALL
Often people forget or misunderstand the difference between UNION and the UNION ALL keywords in a query.
UNION ALL
SELECT NOW = GETDATE()
UNION ALL
SELECT NOW = GETDATE()
This gives us the entire 'set' of data: both queries are executed. Note that they are executed serially, not in parallel, and the order in which they run is not determined; you never get a situation where the top query is parallelised with the bottom query, although the components of each query may themselves be executed in parallel and combined as the last step.
Output (notice the two rows are identical; this is because GETDATE() gives a consistent value across the entire query rather than a fresh value per query within the UNION construct):
NOW
-----------------------
2006-06-29 08:01:48.937
2006-06-29 08:01:48.937
For Detail View : http://sqlblogcasts.com/blogs/tonyrogerson/archive/2006/06/29/849.aspx
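By contrast, plain UNION removes duplicate rows before returning the set. A quick side-by-side, in the same T-SQL style as above:

```sql
-- UNION ALL keeps both rows, even though they are identical:
SELECT NOW = GETDATE()
UNION ALL
SELECT NOW = GETDATE();

-- UNION removes duplicates; because GETDATE() is consistent across
-- the whole statement, only a single row comes back:
SELECT NOW = GETDATE()
UNION
SELECT NOW = GETDATE();
```

The duplicate-removal step is also why UNION is generally more expensive than UNION ALL: the engine has to sort or hash the combined set to find matches.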
Wednesday, March 4, 2009
Learn CSS
This article isn't meant to give you a complete and thorough overview of CSS, but should give you a practical foundation for working with CSS in templates and learning more in the future.
Basic CSS Rules
CSS code is structured differently than HTML. You can think of CSS code as a list of rules. First you have to state what you're making the rule for (the "selector"). Then, you list out the different properties that you want to change. It's pretty simple!
selector {
property1: value;
property2: value;
}
Like with HTML, you can format CSS however you want. It can be in one long line, or split up into several lines with tabs for readability.
A selector can be an HTML element, a custom class, or a reference to an ID.
If the selector is an HTML element (such as p, h1, a, and so on), you can use it to set properties for every instance of that HTML element on the page.
The selector can also be a custom class. Custom classes can be named almost anything you want, and start with a period: .redtext, .nomargin, and .bigheading are all examples of classes. In the HTML code, you can apply these classes with the class attribute, like this:
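For example, using one of the class names mentioned above (the color is arbitrary):

```html
<style>
  .redtext { color: red; }
</style>

<p class="redtext">This paragraph picks up the .redtext rule.</p>
```

Any element with class="redtext" gets the rule's properties, so one class can restyle many elements at once.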
For Details View : http://expression.microsoft.com/en-us/dd326792.aspx
Sunday, February 22, 2009
What is SEO (Search Engine Optimization)?
SEO is the process of optimizing a website, by improving both on-site and off-site aspects, in order to increase the traffic your site receives from search engines.
SEO aims to get webpages indexed and to improve their ranking for the keywords most relevant to them.
SEO makes a site search-engine friendly so that potential customers can find it easily when surfing the web.
SEO results are also called organic or natural results, because you do not have to pay the search engine to get listed in its database or to achieve high rankings on desired keywords.
View Details : SEO Detail
Friday, February 20, 2009
offline Gmail
Web-based email is great because you can check it from any computer, but there's one little catch: it's inherently limited by your internet connection. From public WiFi to smartphones equipped with 3G, from mobile broadband cards to fledgling in-flight wireless on airplanes, Internet access is becoming more and more ubiquitous, but there are still times when you can't access your webmail because of an unreliable or unavailable connection. Today we're starting to roll out an experimental feature in Gmail Labs that should help fill in those gaps: offline Gmail. So even if you're offline, you can open your web browser, go to gmail.com, and get to your mail just like you're used to.
For Details View : http://gmailblog.blogspot.com/2009/01/new-in-labs-offline-gmail.html
Tips for ASP.NET
Tip: Do not use the AutoPostBack attribute on the DropDownList control to simply redirect to another page.
There are probably cases where this makes sense, but for the most part it is overkill. Using the autopostback for a redirect requires an extra roundtrip to the server: first the autopostback returns to the server and processes everything up to the event handling the postback; then a Response.Redirect is issued, which tells the client to request another page. So you end up with two separate requests, plus processing, just to get a user to another page.
Using the onchange event of the select element, we can do this all on the client. In the sample below, I am simply redirecting to the current page with an updated querystring element. Your logic will vary, but in the case below, I am avoiding the zero index.
<asp:DropDownList runat="Server" onchange="if (this.selectedIndex > 0) { window.location = window.location.pathname + '?t=' + this[this.selectedIndex].value; }">
Tip: Never use the ASP.Net Label control.
Never is a strong word, but except for some quick-and-dirty style hacks, you should never use this control. Any text is rendered inside a span element, which is usually unnecessary and complicates any CSS styling you may be trying to apply. In most cases, you can replace the Label with a Literal and achieve the same results.
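To see the difference, compare what each control renders (a minimal sketch):

```aspx
<%-- Label wraps its text in a span: --%>
<asp:Label runat="server" Text="Hello" />
<%-- renders roughly: <span id="...">Hello</span> --%>

<%-- Literal emits the text by itself: --%>
<asp:Literal runat="server" Text="Hello" />
<%-- renders: Hello --%>
```

The extra span is what gets in the way when your CSS targets the surrounding markup.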
Tip: Use the ASP.Net Repeater instead of DataList, DataGrid, and DataView controls
The Repeater is the single most powerful control shipped in ASP.NET. It is versatile and lightweight. There are times (especially prototyping) when the other databound controls make sense to use, but they generate a lot of extra markup and generally complicate the page with all of their events and styling. Using the Repeater, you may write a little more code up front, but you will be rewarded in the long run.
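A minimal Repeater emitting a plain list might look like this (the data source and the ProductName field are illustrative):

```aspx
<asp:Repeater ID="rptProducts" runat="server">
  <HeaderTemplate><ul></HeaderTemplate>
  <ItemTemplate>
    <li><%# Eval("ProductName") %></li>
  </ItemTemplate>
  <FooterTemplate></ul></FooterTemplate>
</asp:Repeater>
```

In code-behind you would set rptProducts.DataSource and call rptProducts.DataBind(); the Repeater renders exactly the markup in its templates and nothing more.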
For Details View : http://simpable.com/code/quick-tips-for-asp-net-part-one/
Thursday, February 19, 2009
Using the Data Access Application Block to Execute SQL Statements
Once you have the references set and the correct using or Imports statements in your class files, you will have access to the Data Access Application Block's SqlHelper class. The SqlHelper class contains static methods that facilitate the execution of common data access tasks, including:
Calling stored procedures or SQL text commands,
Specifying parameter details, and
Returning SqlDataReader, DataSet, XmlReader objects, or single values.
In order to illustrate the advantage of using the Data Access Block, let's take a look at sample code that creates a SqlDataReader object and binds it to a DataGrid without using the Data Access Block. In general, returning a DataReader involves establishing a connection, creating a SqlCommand, and executing the command against the database. The resulting SqlDataReader object can then be bound to a DataGrid:
//create the connection string and sql to be executed
string strConnTxt = "Server=(local);Database=Northwind;Integrated Security=True;";
string strSql = "select * from products where categoryid = 1";
//create and open the connection object
SqlConnection objConn = new SqlConnection(strConnTxt);
objConn.Open();
//Create the command object
SqlCommand objCmd = new SqlCommand(strSql, objConn);
objCmd.CommandType = CommandType.Text;
//databind the datagrid by calling the ExecuteReader() method
DataGrid1.DataSource = objCmd.ExecuteReader();
DataGrid1.DataBind();
//close the connection
objConn.Close();
Now let's look at the same task using the SqlHelper class's static ExecuteReader() method:
//create the connection string and sql to be executed
string strSql = "select * from products where categoryid = 1";
string strConnTxt = "Server=(local);Database=Northwind;Integrated Security=True;";
DataGrid4.DataSource = SqlHelper.ExecuteReader(strConnTxt, CommandType.Text, strSql);
DataGrid4.DataBind();
As you can see, there is considerably less code in the second example. To execute a SQL statement and return a SqlDataReader, the ExecuteReader() method requires only the connection string, command type and SQL to be executed. The SqlHelper class contains all of the "plumbing" necessary to establish a connection, create a SqlCommand and execute the command against the database with a single static method call.
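The SqlHelper class also covers the stored-procedure and parameter cases from the list above. Here is a sketch, assuming the Northwind CustOrderHist procedure and the DAAB overload that maps supplied values onto the procedure's parameters in order (the grid name is illustrative):

```csharp
// Execute a stored procedure with one parameter value ("ALFKI")
// and bind the resulting DataSet to a grid.
DataSet ds = SqlHelper.ExecuteDataset(
    strConnTxt, "CustOrderHist", "ALFKI");
DataGrid2.DataSource = ds;
DataGrid2.DataBind();
```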
The main advantage of the Application Blocks is that they greatly reduce the amount of code you need to write by encapsulating common tasks in a wrapper class. While at first glance this may not seem that profound of a benefit, realize that writing less code means more than just shorter time needed to write the code. It also means fewer bugs and typos, and an overall lower total cost to produce the software.
Happy coding :)
Silverlight video tutorials
http://silverlight.net/learn/videocat.aspx?cat=2#HDI2Data
It's a very informative link for beginners.
Wednesday, February 18, 2009
ASP.NET Supports Valid HTML Attributes in its Tags
While reading an article, I came across a misconception that seems all too common among people using ASP.NET: they need to stop trusting intellisense!!! I love intellisense, but you can't trust it entirely. It doesn't know everything, so as a rule I will state that if HTML supports something... SO DOES ASP.NET!
So now that I am done ranting I first want to say I am not intending to bash the author or the publisher site. Both have some great content and are valuable to the .NET community. I simply want to step in and provide some clarity by providing some explanation and an alternative solution to the problem. The article is about adding a tooltip to individual items in a dropdown list. There are plenty of reasons to do this, including the one the author states which is that the list might have a fixed width and will display badly in IE. Below is an example of a DropDownList .
I personally prefer the way Firefox handles a tooltip anyway.
The thing that seems to throw everyone off is that some ASP.NET controls don't seem to have many properties when you look at intellisense. A lot of them don't include style, title, or some other commonly used attributes. THEY ARE STILL THERE!! Underneath the hood ASP.NET is basically just the HTML you know and love. So when you go in and try to add the title attribute to a ListItem, you will not see it in the intellisense box.
I've heard plenty of people complain that some controls don't support the style attribute in ASP.NET. If the underlying HTML element supports an attribute, the server control supports it too. In this example, since the option tag supports the title attribute, the ListItem of a DropDownList supports it as well.
Adding title from code behind
_listItem.Attributes.Add("title", _listItem.Text);
If the above code were somehow to run twice, it would literally add two separate title attributes into the HTML, which is not always the best way of handling things. I just figured I would throw this out there so people have a better understanding of the connection between ASP.NET and HTML. Some people treat ASP.NET as some new thing, when at the end of the day it is really just creating HTML.
Don't let ASP.NET be mystical, it is relatively easy to understand if you just read articles and blogs. Be curious and questioning of everything. Yes, I mean for you to question what I tell you also. Plenty of things I write over a long period of time will be questionable and some will just be outright wrong. Everyone will do it. Yes, even people writing documentation.
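Applied across a whole list, the same idea looks like this (a sketch; DropDownList1 is an assumed control name):

```csharp
// Give every item in the list a tooltip equal to its own text,
// so long entries are readable even in a fixed-width dropdown.
foreach (ListItem item in DropDownList1.Items)
{
    item.Attributes.Add("title", item.Text);
}
```

Each attribute ends up on the rendered option element, exactly as it would if you had written the HTML by hand.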
Friday, February 13, 2009
Preventing Multiple Logins in ASP.NET
We talked about the fact that the classic ASP Session_OnEnd handler is widely known to be pretty unreliable. However, in ASP.NET the corresponding Global class handler, Session_End, is very reliable. Then we talked about "what if" scenarios, such as what if the ASP.NET worker process was recycled? If so, I reasoned, it didn't matter whether you were using Session, Application or Cache, all of your stuff would be lost. The only exceptions to this would be if you were using the ASP.NET State Server service for your Session, or the SQL Server Session option. In particular, there is a second script available for the SQL Server Session option that does not use the TempDB, and this means that even if the whole machine goes down, when it comes back up, the Session data will still be there. Both StateServer and SQL Server Session options run out of process, so it really doesn't matter if the ASPNET_WP.EXE worker process is recycled - the sessions, which run out of the ASP.NET worker process and rely on the Session Cookie that's stored at the browser, will still be there.
The main issue is that if you put some sort of "lock" on the user record because somebody has logged in, and then they close their browser and you don't have a reliable way of determining that their session has expired so you can remove the lock, you are likely to get calls to your Tech Support desk from users complaining they cannot log in! (trust me, I have good reports that this has happened...)
The big problem, it turns out, is that with StateServer and SQL Server Sessions, the Session_End event in Global is never fired; only InProc mode fires it. So in order to avoid Tech Support coming after us with hatchets and knives, we need to come up with some sort of reliable surrogate for the Session_End event. Robbe took off on his own angle here and wrote an excellent article about using the Cache class to handle some of these issues. Robbe also discusses how to use the callback mechanism in the Cache class to handle the situation where an item is removed from the Cache. In fact, he's determined that this event even fires when the ASP.NET worker process recycles under normal conditions (such as when specified in machine.config), thereby enabling us to serialize Cache items to a database for later rehydration.
As it often turns out, sometimes the simplest solution to a problem is also the most elegant and even the most scalable. The solution to the multiple login problem that I came up with and present here simply uses the Cache with SlidingExpiration as a surrogate for a Session_End event. First, here's the logic:
1) User logs in, we check the Cache using username+password as the key for the Cache Item. If the Cache item exists, we know that the login is already in use, so we kick them out. Otherwise, we authenticate them (database, etc) and let them in.
2) After we have let them in, we set a new Cache item entry with a key consisting of their username+password, with a sliding expiration equal to the current Session Timeout value. We can also set a new Session variable, Session["user"], with a value of the username+password, so that we can do continuous page request checking and Cache updating on every page request during the user's session. This gives us the infrastructure for "duplicating" the missing Session_End functionality.
3) Now we need a way to update the Cache expiration on each page request. You can do this very elegantly in the Application_PreRequestHandlerExecute handler in Global, because the Session object is available and "live" in this handler. In addition, this event is fired on every page request, so we don't need to put a single line of extra code in any of our pages. We use the Session["user"] value to get this user's key to retrieve their Cache Item, thus resetting it and automatically setting the sliding expiration to a fresh timeout value. Whenever you access a Cache item, its SlidingExpiration property (if properly configured) is automatically updated. When a user abandons their session and no pages are requested for a period of time, the SlidingExpiration of their Cache Item eventually expires, and the item is automatically removed from the Cache, thereby allowing somebody with the same username and password to log in again. No fuss, no muss! Works with InProc, StateServer and SQL Server Session modes!
Now let's take a look at some code as to how this can be implemented, in its most basic form:
In web.config (StateServer mode, with a one minute timeout to make testing easier):
<sessionState
mode="StateServer"
stateConnectionString="tcpip=127.0.0.1:42424"
sqlConnectionString="data source=127.0.0.1;user id=sa;password=letmein"
cookieless="false"
timeout="1"
/>
In Global.asax.cs:
protected void Application_PreRequestHandlerExecute(Object sender, EventArgs e)
{
// Write a message to show this got fired
Response.Write("SessionID: " + Session.SessionID + ", User key: " + (string)Session["user"]);
if (Session["user"] != null) // i.e. this is after an initial logon
{
string sKey = (string)Session["user"];
// Accessing the Cache item extends its sliding expiration automatically
string sUser = (string)HttpContext.Current.Cache[sKey];
}
}
In your Login Page "Login" button handler:
private void Button1_Click(object sender, System.EventArgs e)
{
// Validate your user here (Forms Auth or database, for example).
// This could be a new "illegal" logon, so we need to check
// whether these credentials are already in the Cache.
string sKey = TextBox1.Text + TextBox2.Text;
string sUser = Convert.ToString(Cache[sKey]);
if (sUser == null || sUser == String.Empty)
{
// No Cache item, so the session either expired or this is a new sign-on.
// Set the Cache item and the Session hit-test for this user.
// TimeSpan(days, hours, minutes, seconds, ms): Session.Timeout is in minutes.
TimeSpan SessTimeOut = new TimeSpan(0, 0, HttpContext.Current.Session.Timeout, 0, 0);
HttpContext.Current.Cache.Insert(sKey, sKey, null, DateTime.MaxValue, SessTimeOut,
System.Web.Caching.CacheItemPriority.NotRemovable, null);
Session["user"] = sKey;
// Let them in - redirect to main page, etc.
Label1.Text = "";
}
else
{
// Cache item exists, so this login is already in use - keep them out.
Label1.Text = "This account is already in use.";
return;
}
}
You can try logging in with any username / password you want. If you try again, you won't get in (unless you wait long enough for the Cache item to expire). Each time you try, the sliding expiration of the Cache item gets updated (the same happens on any page request). You can try logging in from another browser window, or even another machine. It doesn't matter - you won't be able to abuse the Big Brother license login policy.
What about a Web Farm?
There are certainly trade-offs to be considered when dealing with Sessions on a web farm. StateServer is normally set up to act as a central session server for all the servers in a web farm: by definition, you pick a machine and all the web.config entries point to the IP address of that machine. However, I know of at least one organization that runs StateServer on each and every machine in the farm, with sticky IP to make sure that everybody always returns to the machine where their Session was started. While this configuration might seem like "shooting yourself in the foot", it is conceivable that an organization might opt for this where redundancy, rather than scalability, is the overriding consideration. (Of course, if you have StateServer and sticky IP on every machine in the farm, and only one SQL Server with no clustering and failover, the jury might still be out on how much redundancy you have actually achieved.)
If your overriding concern is that the particular StateServer machine may "go down", then your only other option is to use the SQL Server session mode and choose the SQL script "InstallPersistSqlState.sql", which specifically does not use TempDB (TempDB is cleared when the machine is rebooted).
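For reference, the corresponding sessionState entry for SQL Server mode would look like the sketch below; the server name and credentials are placeholders, not values from this article:

```xml
<sessionState
    mode="SQLServer"
    sqlConnectionString="data source=YourSqlServer;user id=YourUser;password=YourPassword"
    cookieless="false"
    timeout="20"
/>
```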
There is no sharing of Cache between web applications on a farm. Also, it was brought to my attention by reader Paul Abraham (who has provided helpful comments on more than one occasion here) that on a multi-processor machine we can configure ASP.NET in web garden mode, in which case we will have more than one worker process. Consequently, we will then have more than one instance of the System.Web.Caching.Cache class operative in our application (one instance of this class is created per application domain). In this context, we have the same problem synchronizing the Cache in web garden mode that we would in a web farm scenario.
In these situations, you can be creative with CacheDependency and CacheItemRemovedCallback. For example, on each web server (or AppDomain) your cache objects can depend on a special file, and on cache addition or removal touch that file so that cache objects on other web servers can get notified and be removed. Now that I think of it, you could even use the very same file that the dependency is created on to store the data that each server needs to get in order to update its Cache.
There is a bug in ASP.NET 1.0 where a CacheDependency on a file on a UNC share does not work across multiple web applications. One workaround is to have one file per web application per web server, and to touch all of them during an update. Another thing to remember about a server farm: if you are sharing Session state with StateServer or SQL Server, the SessionID, which is contained in a browser cookie or munged into the URL, is transmitted for the particular user no matter which server their request lands on.
So if you match the ASP.NET Session ID to the username+Password of the login, you have a method to check the Cache on any of the servers to handle both session checking and timeout updating. There is also an excellent article by David Burgett on MSDN about using in-memory Datasets and a WebService to synchronize data in a farm.
Cache Synchronization Down on the Farm
While creating a shared Cache object among servers on a farm is beyond the scope of this article, it is definitely "do-able", and hopefully the above ideas will give you some food for thought. Synchronization of the Cache across a server farm is one thing that Microsoft left out of the Cache class. However, based on the ideas brought up in this article, it can be seen that there are likely a number of uses for such an arrangement.
One way to set up Cache synchronization among servers in a web farm is to use SQL Server with two tables: one holding a list of the servers currently active in the web farm, and a second table holding "update" information for the cache. This "CacheItems" table would probably need at least four columns: a varchar column for the cache "key" (in this case username+password), a DateTime column for the current sliding expiration value, another DateTime column for the absolute expiration (if used), and finally an IMAGE column to hold the byte stream of the serialized object graph of the Cache item, produced with the BinaryFormatter, in order to store complex objects from the Cache in the same way that SQL Server Session state does. In this manner it would be possible not only to synchronize the Cache among servers in a farm, but to actually create a backup "persistent Cache" datastore from which a rebooting or first-time farm member machine can hydrate its Cache and "join the chorus", so to speak.
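As a sketch only - the table and column names here are my own suggestions, not a prescribed schema - the "CacheItems" table described above might look like this:

```sql
CREATE TABLE CacheItems (
    CacheKey           VARCHAR(200) NOT NULL PRIMARY KEY, -- username+password cache key
    SlidingExpiration  DATETIME     NOT NULL,             -- current sliding expiration value
    AbsoluteExpiration DATETIME     NULL,                 -- absolute expiration, if used
    ItemData           IMAGE        NULL                  -- serialized object graph (BinaryFormatter byte stream)
)
```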
So, for example, when a session expires in the Cache on one server, you can write an update to a persistent Cache storage table in SQL Server. The notification can be made through a WebRequest sent to each of the servers on the farm, targeting a special .aspx receiver page in each app domain. This receiver page basically gets the "notification" and goes to the SQL Server to update its resident copy of the Cache from the table described above. Each machine would have a page capable of handling this process, and thus every machine on the farm would have the capability both to update the backup store and notify the other web servers, and to receive a notification that it needs to retrieve and process the update record(s) from SQL Server.
Setting authorization rules for a particular page or folder
I have seen so many people asking again and again how to allow access to a particular page for a certain user or role. So I thought it would be good to put this in one place. I will discuss how to configure web.config depending on the scenario.
We will start with a web.config without any authorization and modify it on a case by case basis.
No Authorization
We will start with the root web.config without any authorization.
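The original snippet appears to have been lost to formatting; a minimal root web.config authorization section with no restrictions would look like this:

```xml
<system.web>
    <authorization>
        <!-- everyone, including anonymous users, is allowed -->
        <allow users="*" />
    </authorization>
</system.web>
```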
Deny Anonymous user to access entire website
This is the case when you want everybody to log in before they can start browsing around your website, i.e. the first thing they will see is a login page. This situation is good when users don't have to register themselves, but instead their user account is created by some administrator.
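A minimal sketch of that setup (the Forms authentication element and its loginUrl are assumptions shown only to make the snippet complete):

```xml
<system.web>
    <authentication mode="Forms">
        <forms loginUrl="login.aspx" />
    </authentication>
    <authorization>
        <!-- "?" means anonymous users: deny them everywhere -->
        <deny users="?" />
    </authorization>
</system.web>
```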
Allow access to everyone to a particular page
Sometimes you want to allow public access to your registration page and restrict access to the rest of the site to logged-in / authenticated users only, i.e. not allow anonymous access. Say your registration page is called register.aspx and lives in your site's root folder. In the web.config of your website's root folder you need the following setup (the location path would change if the page lives elsewhere, e.g. ~/publicpages/register.aspx):
<system.web>
<authorization>
<!-- this will restrict anonymous user access -->
<deny users="?" />
</authorization>
</system.web>
<!-- path here is the path to your register.aspx page -->
<location path="register.aspx">
<system.web>
<authorization>
<!-- this will allow access to everyone to register.aspx -->
<allow users="*" />
</authorization>
</system.web>
</location>
So far we have seen how to allow either everyone or only authenticated users. But there are cases where we want to allow a particular user access to certain pages and deny everyone else (authenticated as well as anonymous).
To allow access to particular user only and deny everyone else
Say you want to give access to user "John" to a particular page, e.g. userpersonal.aspx, and deny all others. The location tag should look like this:
<location path="userpersonal.aspx">
<system.web>
<authorization>
<!-- allow John. Note: you can have multiple users separated by commas, e.g. John,Mary -->
<allow users="John" />
<!-- deny others -->
<deny users="*" />
</authorization>
</system.web>
</location>
Allow only users in particular Role
I will not show here how to set up roles; I assume you have role management set up for your users. We will now see what needs to be done in web.config to configure authorization for a particular role. E.g. you have two roles, Customer and Admin, and two folders, CustomerFolder and AdminFolder. Users in the Admin role can access both folders. Users in the Customers role can access only CustomerFolder, not AdminFolder. You will have to add a location tag for each folder path, as shown below:
<location path="AdminFolder">
<system.web>
<authorization>
<!-- allow users in the Admin role -->
<allow roles="Admin" />
<!-- deny everyone else -->
<deny users="*" />
</authorization>
</system.web>
</location>
<location path="CustomerFolder">
<system.web>
<authorization>
<!-- allow users in the Admin and Customers roles -->
<allow roles="Admin,Customers" />
<!-- deny the rest -->
<deny users="*" />
</authorization>
</system.web>
</location>
Alternate way - using individual web.config for each Folder
As an alternative to the location-tag method above, you can add a web.config to each folder and configure authorization there in almost the same way, just without the location tag. Taking the same example as above, add a web.config to both folders, AdminFolder and CustomerFolder.
Web.config in AdminFolder should look like:
<system.web>
<authorization>
<!-- allow users in the Admin role -->
<allow roles="Admin" />
<!-- deny everyone else -->
<deny users="*" />
</authorization>
</system.web>
Web.config in CustomerFolder should look like:
<system.web>
<authorization>
<!-- allow users in the Admin and Customers roles -->
<allow roles="Admin,Customers" />
<!-- deny the rest -->
<deny users="*" />
</authorization>
</system.web>
Images and CSS files
Say you keep all your images and CSS in a separate folder called images, and you are denying anonymous access to your website. In that case you may find that on your login page you cannot see the images (if any) or the CSS (if any) applied to your login page controls.
The fix is to add a web.config to the images (and CSS) folder and allow everyone access to that folder. The web.config in the images folder should look like this:
<system.web>
<authorization>
<!-- allow everyone -->
<allow users="*" />
</authorization>
</system.web>
Common Mistakes
I have seen people complain that they have set up their roles correctly and made the entry in their web.config, but authorization still doesn't work: even though they have allowed access to their role, the user cannot access the page/folder. The common reason is placing the deny rule before the allow rule.
Say the web.config from AdminFolder as we have seen before is something like this:
<!-- This web.config will NOT allow access to users even if they are in the Admin role -->
<system.web>
<authorization>
<!-- deny everyone else -->
<deny users="*" />
<!-- allows users in the Admin role -->
<allow roles="Admin" />
</authorization>
</system.web>
Since authorization rules are evaluated from top to bottom until a match is found, the deny-all rule here matches first; the allow rule is never checked, and access is denied even to users in the Admin role.
So PUT all allows BEFORE ANY deny.
NOTE: deny works the same way as allow. You can deny particular roles or users as per your requirement.
I hope this answers some of the questions regarding how to authorize pages / folders (directories). Comments welcome.
Friday, February 6, 2009
Reading and Displaying Source of Web Pages
In ASP.NET 1.1 we have two ways.
First way - use WebClient to download the bytes and decode them (note that DownloadData is an instance method, so we need a WebClient object):
Dim wc As New System.Net.WebClient()
Dim arrbyte() As Byte = wc.DownloadData(URL)
Dim strHtml As String = System.Text.Encoding.Default.GetString(arrbyte)
Second way - create a WebRequest and read the WebResponse:
public void MySource(Uri weburi)
{
System.Text.StringBuilder sbuild = new StringBuilder();
string temp = "";
System.Net.HttpWebRequest webrequest = (HttpWebRequest)System.Net.WebRequest.Create(weburi);
System.Net.HttpWebResponse webresponse = (HttpWebResponse)webrequest.GetResponse();
StreamReader webstream = new StreamReader(webresponse.GetResponseStream(), Encoding.ASCII);
while ((temp = webstream.ReadLine()) != null)
{
sbuild.Append(temp + "\r\n");
}
webstream.Close();
webresponse.Close();
Response.Write("OK");
Response.Write(sbuild.ToString());
}
In ASP.NET 2.0 it becomes a one-liner (again on a WebClient instance):
Dim wc As New System.Net.WebClient()
Dim strPageHTML As String = wc.DownloadString(URL)
Wednesday, February 4, 2009
How to get multiple records in one column?
Suppose you need to concatenate values from multiple records into a single variable or column:
DECLARE @var NVARCHAR(1000)
-- ISNULL seeds the first concatenation; each subsequent row appends to @var
SELECT @var = ISNULL(@var, '') + var1 + var2
FROM table
In this way you can gather multiple records into a single variable. For instance, if the table holds rows ('a','1') and ('b','2'), @var ends up as 'a1b2' (row order is not guaranteed without an ORDER BY).
Happy Coding :P
Saturday, January 24, 2009
csv not opening in IE -7
Hi - it's basically a settings issue.
Go to Tools -> Internet Options -> Security tab -> Custom Level.
Under Downloads, enable "Automatic prompting for file downloads".
Happy Development .........
:)