\");\n}\n\n```\n\nThe key to this method is in that onload bit of Javascript, which , when the body of the page loads, submits the form and then redirects the user back to my own Home page. The reason for that bit of hoodoo is that I'm launching the external site in a new window, but don't want the user to resubmit the hidden form if they refresh the page. Plus that hidden form pushed the page down a few pixels which got on my nerves.\n\nI'd be very interested in any cleaner ideas anyone has on this one.\n\nEric Sipple"}}},{"rowIdx":454,"cells":{"qid":{"kind":"number","value":5188,"string":"5,188"},"question":{"kind":"string","value":"
I have a web reference for our report server embedded in our application. The server that the reports live on could change though, and I'd like to be able to change it \"on the fly\" if necessary.
\n\n
I know I've done this before, but can't seem to remember how. Thanks for your help.
\n\n
I've manually driven around this for the time being. It's not a big deal to set the URL in the code, but I'd like to figure out what the \"proper\" way of doing this in VS 2008 is. Could anyone provide any further insights? Thanks!
\n\n
\n\n
In VS2008 when I change the URL Behavior property to Dynamic I get the following code auto-generated in the Reference class.
\n\n
Can I override this setting (MySettings) in the web.config? I guess I don't know how the settings stuff works.
\n\n
Public Sub New()\n MyBase.New\n Me.Url = Global.My.MySettings.Default.Namespace_Reference_ServiceName\n If (Me.IsLocalFileSystemWebService(Me.Url) = true) Then\n Me.UseDefaultCredentials = true\n Me.useDefaultCredentialsSetExplicitly = false\n Else\n Me.useDefaultCredentialsSetExplicitly = true\n End If\nEnd Sub\n
\n\n
EDIT
\n\n
So this stuff has changed a bit since VS03 (which was probably the last VS version I used to do this).
\n\n
According to: http://msdn.microsoft.com/en-us/library/a65txexh.aspx it looks like I have a settings object on which I can set the property programmatically, but that I would need to provide the logic to retrieve that URL from the web.config.
\n\n
Is this the new standard way of doing this in VS2008, or am I missing something?
\n\n
EDIT #2
\n\n
Anyone have any ideas here? I drove around it in my application and just put the URL in my web.config myself and read it out. But I'm not happy with that because it still feels like I'm missing something.
If you mean a VS2005 \"Web Reference\", then the generated proxy classes have a URL property that is the SOAP endpoint url of that service. You can change this property and have your subsequent http communications be made to that new endpoint.
\n\n
Edit: Ah, thanks bcaff86. I didn't know you could do that simply by changing a property.
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/07"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5188","https://Stackoverflow.com","https://Stackoverflow.com/users/326/"],"string":"[\n \"https://Stackoverflow.com/questions/5188\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/326/\"\n]"},"input":{"kind":"string","value":"I have a web reference for our report server embedded in our application. The server that the reports live on could change though, and I'd like to be able to change it \"on the fly\" if necessary.\n\nI know I've done this before, but can't seem to remember how. Thanks for your help.\n\nI've manually driven around this for the time being. It's not a big deal to set the URL in the code, but I'd like to figure out what the \"proper\" way of doing this in VS 2008 is. Could anyone provide any further insights? Thanks!\n\n---\n\nIn **VS2008** when I change the URL Behavior property to Dynamic I get the following code auto-generated in the Reference class.\n\nCan I override this setting (MySettings) in the web.config? I guess I don't know how the settings stuff works.\n\n```\nPublic Sub New()\n MyBase.New\n Me.Url = Global.My.MySettings.Default.Namespace_Reference_ServiceName\n If (Me.IsLocalFileSystemWebService(Me.Url) = true) Then\n Me.UseDefaultCredentials = true\n Me.useDefaultCredentialsSetExplicitly = false\n Else\n Me.useDefaultCredentialsSetExplicitly = true\n End If\nEnd Sub\n\n```\n\n*EDIT*\n\nSo this stuff has changed a bit since VS03 (which was probably the last VS version I used to do this).\n\nAccording to: it looks like I have a settings object on which I can set the property programatically, but that I would need to provide the logic to retrieve that URL from the web.config.\n\nIs this the new standard way of doing this in VS2008, or am I missing something?\n\n*EDIT #2*\n\nAnyone have any ideas here? 
I drove around it in my application and just put the URL in my web.config myself and read it out. But I'm not happy with that because it still feels like I'm missing something."},"output":{"kind":"string","value":"In the properties window change the \"behavior\" to Dynamic.\n\nSee: "}}},{"rowIdx":455,"cells":{"qid":{"kind":"number","value":5194,"string":"5,194"},"question":{"kind":"string","value":"
I am prototyping some C# 3 collection filters and came across this.\nI have a collection of products:
\n\n
public class MyProduct\n{\n public string Name { get; set; }\n public Double Price { get; set; }\n public string Description { get; set; }\n}\n\nvar MyProducts = new List<MyProduct>\n{ \n new MyProduct\n {\n Name = \"Surfboard\",\n Price = 144.99,\n Description = \"Most important thing you will ever own.\"\n },\n new MyProduct\n {\n Name = \"Leash\",\n Price = 29.28,\n Description = \"Keep important things close to you.\"\n }\n ,\n new MyProduct\n {\n Name = \"Sun Screen\",\n Price = 15.88,\n Description = \"1000 SPF! Who Could ask for more?\"\n }\n};\n
\n\n
Now if I use LINQ to filter it works as expected:
\n\n
var d = (from mp in MyProducts\n where mp.Price < 50d\n select mp);\n
\n\n
And if I use the Where extension method combined with a Lambda the filter works as well:
\n\n
var f = MyProducts.Where(mp => mp.Price < 50d).ToList();\n
\n\n
Question: What is the difference, and why use one over the other?
\n"},"answers":{"kind":"list like","value":[{"answer_id":5199,"author":"Lasse V. Karlsen","author_id":267,"author_profile":"https://Stackoverflow.com/users/267","pm_score":4,"selected":true,"text":"
LINQ turns into method calls like the code you have.
\n\n
In other words, there should be no difference.
\n\n
However, in your two pieces of code you are not calling .ToList in the first, so the first piece of code will produce an enumerable data source, but if you call .ToList on it, the two should be the same.
As mentioned, d will be IEnumerable&lt;MyProduct&gt; while f is List&lt;MyProduct&gt;
\n\n
The conversion is done by the C# compiler
\n\n
var d = \n from mp in MyProducts\n where mp.Price < 50d\n select mp;\n
\n\n
Is converted to (before compilation to IL and with generics expanded):
\n\n
var d = \n MyProducts.\n Where<MyProduct>( mp => mp.Price < 50d ).\n Select<MyProduct>( mp => mp ); \n //note that this last select is optimised out if it makes no change\n
\n\n
Note that in this simple case it makes little difference. Where Linq becomes really valuable is in much more complicated loops.
\n\n
For instance this statement could include group-bys, orders and a few let statements and still be readable in Linq format when the equivalent .Method().Method.Method() would get complicated.
The syntax you are using for d will get transformed by the compiler into the same IL as the extension methods. The \"SQL-like\" syntax is supposed to be a more natural way to represent a LINQ expression (although I personally prefer the extension methods). As has already been pointed out, the first example will return an IEnumerable result while the second example will return a List result due to the call to ToList(). If you remove the ToList() call in the second example, they will both return the same result as Where returns an IEnumerable result.
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/07"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5194","https://Stackoverflow.com","https://Stackoverflow.com/users/439/"],"string":"[\n \"https://Stackoverflow.com/questions/5194\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/439/\"\n]"},"input":{"kind":"string","value":"I am prototyping some C# 3 collection filters and came across this.\nI have a collection of products:\n\n```\npublic class MyProduct\n{\n public string Name { get; set; }\n public Double Price { get; set; }\n public string Description { get; set; }\n}\n\nvar MyProducts = new List\n{ \n new MyProduct\n {\n Name = \"Surfboard\",\n Price = 144.99,\n Description = \"Most important thing you will ever own.\"\n },\n new MyProduct\n {\n Name = \"Leash\",\n Price = 29.28,\n Description = \"Keep important things close to you.\"\n }\n ,\n new MyProduct\n {\n Name = \"Sun Screen\",\n Price = 15.88,\n Description = \"1000 SPF! Who Could ask for more?\"\n }\n};\n\n```\n\nNow if I use LINQ to filter it works as expected:\n\n```\nvar d = (from mp in MyProducts\n where mp.Price < 50d\n select mp);\n\n```\n\nAnd if I use the Where extension method combined with a Lambda the filter works as well:\n\n```\nvar f = MyProducts.Where(mp => mp.Price < 50d).ToList();\n\n```\n\n**Question:** What is the difference, and why use one over the other?"},"output":{"kind":"string","value":"LINQ turns into method calls like the code you have.\n\nIn other words, there should be no difference.\n\nHowever, in your two pieces of code you are not calling .ToList in the first, so the first piece of code will produce an enumerable data source, but if you call .ToList on it, the two should be the same."}}},{"rowIdx":456,"cells":{"qid":{"kind":"number","value":5223,"string":"5,223"},"question":{"kind":"string","value":"
I have a JavaScript object. Is there a built-in or accepted best practice way to get the length of this object?
I'm not a JavaScript expert, but it looks like you would have to loop through the elements and count them since Object doesn't have a length method:
\n\n
var element_count = 0;\nfor (e in myArray) { if (myArray.hasOwnProperty(e)) element_count++; }\n
\n\n
@palmsey: In fairness to the OP, the JavaScript documentation actually explicitly refers to using variables of type Object in this manner as \"associative arrays\".
@palmsey: In fairness to the OP, the JavaScript documentation actually explicitly refers to using variables of type Object in this manner as &quot;associative arrays&quot;.
\n
\n
\n
And in fairness to @palmsey he was quite correct. They aren't associative arrays; they're definitely objects :) - doing the job of an associative array. But as regards to the wider point, you definitely seem to have the right of it according to this rather fine article I found:
If anything else has been added to Object.prototype, then the suggested code will fail:
\n
<script type="text/javascript">\nObject.prototype.size = function () {\n var len = this.length ? --this.length : -1;\n for (var k in this)\n len++;\n return len;\n}\nObject.prototype.size2 = function () {\n var len = this.length ? --this.length : -1;\n for (var k in this)\n len++;\n return len;\n}\nvar myArray = new Object();\nmyArray["firstname"] = "Gareth";\nmyArray["lastname"] = "Simpson";\nmyArray["age"] = 21;\nalert("age is " + myArray["age"]);\nalert("length is " + myArray.size());\n</script>\n
\n
I don't think that answer should be the accepted one as it can't be trusted to work if you have any other code running in the same execution context. To do it in a robust fashion, surely you would need to define the size method within myArray and check for the type of the members as you iterate through them.
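Along those lines, a standalone counting helper sidesteps the prototype problem entirely by counting only the object's own members. This is a minimal sketch (the function name is my own choosing, not from the answer above):

```javascript
// Count only the object's own enumerable properties,
// ignoring anything inherited from the prototype chain
// (including methods other code may have added to Object.prototype).
function countOwnProperties(obj) {
    var count = 0;
    for (var key in obj) {
        if (Object.prototype.hasOwnProperty.call(obj, key)) {
            count++;
        }
    }
    return count;
}

var myArray = {};
myArray["firstname"] = "Gareth";
myArray["lastname"] = "Simpson";
myArray["age"] = 21;
console.log(countOwnProperties(myArray)); // 3
```

Because nothing is attached to `Object.prototype`, the count stays correct even when other libraries in the same execution context extend it.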
Here's an update as of 2016 and widespread deployment of ES5 and beyond. For IE9+ and all other modern ES5+ capable browsers, you can use Object.keys() so the above code just becomes:
\n
var size = Object.keys(myObj).length;\n
\n
This doesn't have to modify any existing prototype since Object.keys() is now built-in.
\n
<strong>Edit</strong>: Objects can have symbolic properties that cannot be returned via the <code>Object.keys</code> method. So the answer would be incomplete without mentioning them.
\n
Symbol type was added to the language to create unique identifiers for object properties. The main benefit of the Symbol type is the prevention of overwrites.
\n
Object.keys or Object.getOwnPropertyNames does not work for symbolic properties. To return them you need to use Object.getOwnPropertySymbols.
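To illustrate that last point, here is a small sketch showing a symbol-keyed property slipping past `Object.keys`:

```javascript
// Symbol-keyed properties are invisible to Object.keys
// but can be retrieved via Object.getOwnPropertySymbols.
const id = Symbol('id');
const person = { name: 'Gareth', age: 21, [id]: 123 };

console.log(Object.keys(person).length);                  // 2 (string keys only)
console.log(Object.getOwnPropertySymbols(person).length); // 1

// A full own-property count needs both:
const total = Object.keys(person).length +
              Object.getOwnPropertySymbols(person).length;
console.log(total); // 3
```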
The most robust answer (i.e. that captures the intent of what you're trying to do while causing the fewest bugs) would be:
\n
\r\n
\r\n
Object.size = function(obj) {\n var size = 0,\n key;\n for (key in obj) {\n if (obj.hasOwnProperty(key)) size++;\n }\n return size;\n};\n\n// Get the size of an object\nconst myObj = {}\nvar size = Object.size(myObj);
\r\n
\r\n
\r\n
\n
There's a sort of convention in JavaScript that you don't add things to Object.prototype, because it can break enumerations in various libraries. Adding methods to Object is usually safe, though.
To not mess with the prototype or other code, you could build and extend your own object:
\n\n
function Hash(){\n var length=0;\n this.add = function(key, val){\n if(this[key] == undefined)\n {\n length++;\n }\n this[key]=val;\n }; \n this.length = function(){\n return length;\n };\n}\n\nmyArray = new Hash();\nmyArray.add(\"lastname\", \"Simpson\");\nmyArray.add(\"age\", 21);\nalert(myArray.length()); // will alert 2\n
\n\n
If you always use the add method, the length property will be correct. If you're worried that you or others forget about using it, you could add the property counter which the others have posted to the length method, too.
\n\n
Of course, you could always overwrite the methods. But even if you do, your code would probably fail noticeably, making it easy to debug. ;)
For some cases it is better to just store the size in a separate variable. Especially, if you're adding to the array by one element in one place and can easily increment the size. It would obviously work much faster if you need to check the size often.
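A rough sketch of that idea (all names here are my own): funnel every insert through a helper that keeps the counter in sync, so reading the size is a constant-time lookup rather than a full iteration:

```javascript
var registry = {};
var registrySize = 0; // kept in sync manually; reading it is O(1)

function addEntry(key, value) {
    // Only bump the counter for genuinely new keys.
    if (!Object.prototype.hasOwnProperty.call(registry, key)) {
        registrySize++;
    }
    registry[key] = value;
}

addEntry("firstname", "Gareth");
addEntry("lastname", "Simpson");
addEntry("lastname", "Sampson"); // overwrite: size stays the same
console.log(registrySize); // 2
```

The trade-off is discipline: every write must go through `addEntry` (and deletes would need a matching helper), or the counter drifts out of sync.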
Here is a completely different solution that will only work in more modern browsers (Internet Explorer 9+, Chrome, Firefox 4+, Opera 11.60+, and Safari 5.1+)
/**\n * @constructor\n */\nAssociativeArray = function () {};\n\n// Make the length property work\nObject.defineProperty(AssociativeArray.prototype, "length", {\n get: function () {\n var count = 0;\n for (var key in this) {\n if (this.hasOwnProperty(key))\n count++;\n }\n return count;\n }\n});\n
\n
Now you can use this code as follows...
\n
var a1 = new AssociativeArray();\na1["prop1"] = "test";\na1["prop2"] = 1234;\na1["prop3"] = "something else";\nalert("Length of array is " + a1.length);\n
This is better than the accepted answer because it uses native Object.keys if it exists.\nThus, it is the fastest for all modern browsers.
\n\n
if (!Object.keys) {\n Object.keys = function (obj) {\n var arr = [],\n key;\n for (key in obj) {\n if (obj.hasOwnProperty(key)) {\n arr.push(key);\n }\n }\n return arr;\n };\n}\n\nObject.keys(obj).length;\n
Like most JavaScript problems, there are many solutions. You could extend Object so that, for better or worse, it works like many other languages' Dictionary types (plus first-class citizens). Nothing wrong with that, but another option is to construct a new Object that meets your specific needs.
\n\n
function uberject(obj){\n this._count = 0;\n for(var param in obj){\n this[param] = obj[param];\n this._count++;\n }\n}\n\nuberject.prototype.getLength = function(){\n return this._count;\n};\n\nvar foo = new uberject({bar:123,baz:456});\nalert(foo.getLength());\n
Object.defineProperty(Object.prototype, 'length', {\n get: function () {\n var size = 0, key;\n for (key in this)\n if (this.hasOwnProperty(key))\n size++;\n return size;\n }\n});\n
\n
Use
\n
var o = {a: 1, b: 2, c: 3};\nalert(o.length); // <-- 3\no['foo'] = 123;\nalert(o.length); // <-- 4\n
var myObject = {}; // ... your object goes here.\n\n var length = 0;\n\n for (var property in myObject) {\n if (myObject.hasOwnProperty(property)){\n length += 1;\n }\n };\n\n console.log(length); // logs 0 in my example.\n
If you are using AngularJS 1.x you can do things the AngularJS way by creating a filter and using the code from any of the other examples such as the following:
\n\n
// Count the elements in an object\napp.filter('lengthOfObject', function() {\n return function( obj ) {\n var size = 0, key;\n for (key in obj) {\n if (obj.hasOwnProperty(key)) size++;\n }\n return size;\n }\n})\n
If you don't care about supporting Internet Explorer 8 or lower, you can easily get the number of properties in an object by applying the following two steps:
\n\n
\n
Run either Object.keys() to get an array that contains the names of only those properties that are enumerable or Object.getOwnPropertyNames() if you want to also include the names of properties that are not enumerable. Then take the length of the returned array.
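As a short illustration, the two calls differ only in whether non-enumerable properties are counted:

```javascript
var obj = { a: 1, b: 2 };
Object.defineProperty(obj, "hidden", {
    value: 3,
    enumerable: false // excluded from for-in and Object.keys
});

console.log(Object.keys(obj).length);                // 2
console.log(Object.getOwnPropertyNames(obj).length); // 3
```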
This solution works in many cases and across browsers:
\n\n
Code
\n\n
var getTotal = function(collection) {\n\n var length = collection['length'];\n var isArrayObject = typeof length == 'number' && length >= 0 && length <= Math.pow(2,53) - 1; // Number.MAX_SAFE_INTEGER\n\n if(isArrayObject) {\n return collection['length'];\n }\n\n var i = 0;\n for(var key in collection) {\n if (collection.hasOwnProperty(key)) {\n i++;\n }\n }\n\n return i;\n};\n
\n\n
Data Examples:
\n\n
// case 1\nvar a = new Object();\na[\"firstname\"] = \"Gareth\";\na[\"lastname\"] = \"Simpson\";\na[\"age\"] = 21;\n\n//case 2\nvar b = [1,2,3];\n\n// case 3\nvar c = {};\nc[0] = 1;\nc.two = 2;\n
You can simply use Object.keys(obj).length on any object to get its length. Object.keys returns an array containing all of the object keys (properties) which can come in handy for finding the length of that object using the length of the corresponding array. You can even write a function for this. Let's get creative and write a method for it as well (along with a more convenient getter property):
\n\n
\r\n
\r\n
function objLength(obj)\r\n{\r\n return Object.keys(obj).length;\r\n}\r\n\r\nconsole.log(objLength({a:1, b:\"summit\", c:\"nonsense\"}));\r\n\r\n// Works perfectly fine\r\nvar obj = new Object();\r\nobj['fish'] = 30;\r\nobj['nullified content'] = null;\r\nconsole.log(objLength(obj));\r\n\r\n// It also works your way, which is creating it using the Object constructor\r\nObject.prototype.getLength = function() {\r\n return Object.keys(this).length;\r\n}\r\nconsole.log(obj.getLength());\r\n\r\n// You can also write it as a method, which is more efficient as done so above\r\n\r\nObject.defineProperty(Object.prototype, \"length\", {get:function(){\r\n return Object.keys(this).length;\r\n}});\r\nconsole.log(obj.length);\r\n\r\n// probably the most effictive approach is done so and demonstrated above which sets a getter property called \"length\" for objects which returns the equivalent value of getLength(this) or this.getLength()
var myObject = new Object();\r\nmyObject[\"firstname\"] = \"Gareth\";\r\nmyObject[\"lastname\"] = \"Simpson\";\r\nmyObject[\"age\"] = 21;\r\n\r\nvar size = JSON.stringify(myObject).length;\r\n\r\ndocument.write(size);
\r\n
\r\n
\r\n
\n\n
\r\n
\r\n
JSON.stringify(myObject)
\r\n
\r\n
\r\n
\n"},{"answer_id":50608738,"author":"Mithu A Quayium","author_id":2079914,"author_profile":"https://Stackoverflow.com/users/2079914","pm_score":4,"selected":false,"text":"
The simplest way is like this:
\n
Object.keys(myobject).length\n
\n
Where myobject is the object of what you want the length of.
Object.keys does not return the right result in case of object inheritance. To properly count object properties, including inherited ones, use for-in. For example, by the following function (related question):
\n
var objLength = (o, i = 0) => { for (const p in o) i++; return i }\n
\n
\r\n
\r\n
var myObject = new Object();\nmyObject[\"firstname\"] = \"Gareth\";\nmyObject[\"lastname\"] = \"Simpson\";\nmyObject[\"age\"] = 21;\n\nvar child = Object.create(myObject);\nchild[\"sex\"] = \"male\";\n\nvar objLength = (o, i = 0) => { for (const p in o) i++; return i }\n\nconsole.log(\"Object.keys(myObject):\", Object.keys(myObject).length, \"(OK)\");\nconsole.log(\"Object.keys(child) :\", Object.keys(child).length, \"(wrong)\");\nconsole.log(\"objLength(child) :\", objLength(child), \"(OK)\");
I had a similar need to calculate the bandwidth used by objects received over a websocket. Simply finding the length of the Stringified object was enough for me.
\n\n
websocket.on('message', data => {\n dataPerSecond += JSON.stringify(data).length;\n});\n
With the ECMAScript 6 in-built Reflect object, you can easily count the properties of an object:
\n
Reflect.ownKeys(targetObject).length\n
\n
It will give you the length of the target object's own properties (important).
\n
Reflect.ownKeys(target)\n
\n
\n
Returns an array of the target object's own (not inherited) property\nkeys.
\n
\n
Now, what does that mean? To explain this, let's see this example.
\n
function Person(name, age){\n this.name = name;\n this.age = age;\n}\n\nPerson.prototype.getIntro = function() {\n return `${this.name} is ${this.age} years old!!`\n}\n\nlet student = new Person('Anuj', 11);\n\nconsole.log(Reflect.ownKeys(student).length) // 2\nconsole.log(student.getIntro()) // Anuj is 11 years old!!\n
\n
You can see here, it returned only its own properties while the object is still inheriting the property from its parent.
You can see here, it returned only its own properties while the object is still inheriting the property from its parent.
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/07"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5223","https://Stackoverflow.com","https://Stackoverflow.com/users/147/"],"string":"[\n \"https://Stackoverflow.com/questions/5223\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/147/\"\n]"},"input":{"kind":"string","value":"I have a JavaScript object. Is there a built-in or accepted best practice way to get the length of this object?\n\n```\nconst myObject = new Object();\nmyObject[\"firstname\"] = \"Gareth\";\nmyObject[\"lastname\"] = \"Simpson\";\nmyObject[\"age\"] = 21;\n\n```"},"output":{"kind":"string","value":"Updated answer\n--------------\n\n**Here's an update as of 2016 and [widespread deployment of ES5](http://kangax.github.io/compat-table/es5/) and beyond.** For IE9+ and all other modern ES5+ capable browsers, you can use [`Object.keys()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/keys) so the above code just becomes:\n\n```\nvar size = Object.keys(myObj).length;\n\n```\n\nThis doesn't have to modify any existing prototype since `Object.keys()` is now built-in.\n\n**Edit**: Objects can have symbolic properties that can not be returned via Object.key method. So the answer would be incomplete without mentioning them.\n\nSymbol type was added to the language to create unique identifiers for object properties. The main benefit of the Symbol type is the prevention of overwrites.\n\n`Object.keys` or `Object.getOwnPropertyNames` does not work for symbolic properties. 
To return them you need to use `Object.getOwnPropertySymbols`.\n\n```js\nvar person = {\n [Symbol('name')]: 'John Doe',\n [Symbol('age')]: 33,\n \"occupation\": \"Programmer\"\n};\n\nconst propOwn = Object.getOwnPropertyNames(person);\nconsole.log(propOwn.length); // 1\n\nlet propSymb = Object.getOwnPropertySymbols(person);\nconsole.log(propSymb.length); // 2\n```\n\nOlder answer\n------------\n\nThe most robust answer (i.e. that captures the intent of what you're trying to do while causing the fewest bugs) would be:\n\n```js\nObject.size = function(obj) {\n var size = 0,\n key;\n for (key in obj) {\n if (obj.hasOwnProperty(key)) size++;\n }\n return size;\n};\n\n// Get the size of an object\nconst myObj = {}\nvar size = Object.size(myObj);\n```\n\nThere's a sort of convention in JavaScript that you [don't add things to Object.prototype](https://stackoverflow.com/questions/10757455/object-prototype-is-verboten), because it can break enumerations in various libraries. Adding methods to Object is usually safe, though.\n\n---"}}},{"rowIdx":457,"cells":{"qid":{"kind":"number","value":5260,"string":"5,260"},"question":{"kind":"string","value":"
I have a situation where I want to add hours to a date and have the new date wrap around the work-day. I cobbled up a function to determine this new date, but want to make sure that I'm not forgetting anything.
\n\n
The hours to be added is called \"delay\". It could easily be a parameter to the function instead.
\n\n
Please post any suggestions. [VB.NET Warning]
\n\n
Private Function GetDateRequired() As Date\n ''// A decimal representation of the current hour\n Dim hours As Decimal = Decimal.Parse(Date.Now.Hour) + (Decimal.Parse(Date.Now.Minute) / 60.0) \n\n Dim delay As Decimal = 3.0 ''// delay in hours\n Dim endOfDay As Decimal = 12.0 + 5.0 ''// end of day, in hours\n Dim startOfDay As Decimal = 8.0 ''// start of day, in hours\n\n Dim newHour As Integer\n Dim newMinute As Integer\n\n Dim dateRequired As Date = Now\n Dim delta As Decimal = hours + delay\n\n ''// Wrap around to the next day, if necessary\n If delta > endOfDay Then\n delta = delta - endOfDay\n dateRequired = dateRequired.AddDays(1)\n\n newHour = Integer.Parse(Decimal.Truncate(delta))\n newMinute = Integer.Parse(Decimal.Truncate((delta - newHour) * 60))\n newHour = startOfDay + newHour\n Else\n newHour = Integer.Parse(Decimal.Truncate(delta))\n newMinute = Integer.Parse(Decimal.Truncate((delta - newHour) * 60))\n End If\n\n dateRequired = New Date(dateRequired.Year, dateRequired.Month, dateRequired.Day, newHour, newMinute, 0)\n\n Return dateRequired\nEnd Sub\n
\n\n
Note: This will probably not work if delay is more than 9 hours long. It should never change from 3, through.
\n\n
EDIT:\nThe goal is find the date and time that you get as a result of adding several hours to the current time. This is used to determine a default value for a due date of a submission. I want to add 3 hours to the current time to get the due date time. However, I don't want due dates that go beyond 5pm on the current day. So, I tried to have the hours split between (today, up to 5pm) and (tomorrow, from 8am on), such that adding 3 hours to 4pm would give you 19am, because 1 hour is added to the end of today and 2 hours are added to the beginning of tomorrow.
You should probably write some automated tests for each condition you can think of, and then just start brainstorming more, writing the tests as you think of them. This way, you can see for sure it will work, and will continue to work if you make further changes. Look up Test Driven Development if you like the results.
Okay, how about these? The difference between the approaches should speak for themselves.
\n\n
Also, this is tested about as far as I can throw it. The warranty lasts until... now.
\n\n
Hope it helps!
\n\n
Module Module1\n\n Public Function IsInBusinessHours(ByVal d As Date) As Boolean\n Return Not (d.Hour < 8 OrElse d.Hour > 17 OrElse d.DayOfWeek = DayOfWeek.Saturday OrElse d.DayOfWeek = DayOfWeek.Sunday)\n End Function\n\n\n Public Function AddInBusinessHours(ByVal fromDate As Date, ByVal hours As Integer) As Date\n Dim work As Date = fromDate.AddHours(hours)\n While Not IsInBusinessHours(work)\n work = work.AddHours(1)\n End While\n Return work\n End Function\n\n\n Public Function LoopInBusinessHours(ByVal fromDate As Date, ByVal hours As Integer) As Date\n Dim work As Date = fromDate\n While hours > 0\n While hours > 0 AndAlso IsInBusinessHours(work)\n work = work.AddHours(1)\n hours -= 1\n End While\n While Not IsInBusinessHours(work)\n work = work.AddHours(1)\n End While\n End While\n Return work\n End Function\n\n Sub Main()\n Dim test As Date = New Date(2008, 8, 8, 15, 0, 0)\n Dim hours As Integer = 5\n Console.WriteLine(\"Date: \" + test.ToString() + \", \" + hours.ToString())\n Console.WriteLine(\"Just skipping: \" + AddInBusinessHours(test, hours))\n Console.WriteLine(\"Looping: \" + LoopInBusinessHours(test, hours))\n Console.ReadLine()\n End Sub\n\nEnd Module\n
I've worked with the following formula (pseudocode) with some success:
\n\n
now <- number of minutes since the work day started\ndelay <- number of minutes in the delay\nday <- length of a work day in minutes\n\nx <- (now + delay) / day {integer division}\ny <- (now + delay) % day {modulo remainder}\n\nreturn startoftoday + x {in days} + y {in minutes}\n
You should probably write some automated tests for each condition you can think of, and then just start brainstorming more, writing the tests as you think of them. This way, you can see for sure it will work, and will continue to work if you make further changes. Look up Test Driven Development if you like the results.
Okay, how about these? The difference between the approaches should speak for themselves.
\\n\\n
Also, this is tested about as far as I can throw it. The warranty lasts until... now.
\\n\\n
Hope it helps!
\\n\\n
Module Module1\\n\\n Public Function IsInBusinessHours(ByVal d As Date) As Boolean\\n Return Not (d.Hour < 8 OrElse d.Hour > 17 OrElse d.DayOfWeek = DayOfWeek.Saturday OrElse d.DayOfWeek = DayOfWeek.Sunday)\\n End Function\\n\\n\\n Public Function AddInBusinessHours(ByVal fromDate As Date, ByVal hours As Integer) As Date\\n Dim work As Date = fromDate.AddHours(hours)\\n While Not IsInBusinessHours(work)\\n work = work.AddHours(1)\\n End While\\n Return work\\n End Function\\n\\n\\n Public Function LoopInBusinessHours(ByVal fromDate As Date, ByVal hours As Integer) As Date\\n Dim work As Date = fromDate\\n While hours > 0\\n While hours > 0 AndAlso IsInBusinessHours(work)\\n work = work.AddHours(1)\\n hours -= 1\\n End While\\n While Not IsInBusinessHours(work)\\n work = work.AddHours(1)\\n End While\\n End While\\n Return work\\n End Function\\n\\n Sub Main()\\n Dim test As Date = New Date(2008, 8, 8, 15, 0, 0)\\n Dim hours As Integer = 5\\n Console.WriteLine(\\\"Date: \\\" + test.ToString() + \\\", \\\" + hours.ToString())\\n Console.WriteLine(\\\"Just skipping: \\\" + AddInBusinessHours(test, hours))\\n Console.WriteLine(\\\"Looping: \\\" + LoopInBusinessHours(test, hours))\\n Console.ReadLine()\\n End Sub\\n\\nEnd Module\\n
I've worked with the following formula (pseudocode) with some success:
\\n\\n
now <- number of minutes since the work day started\\ndelay <- number of minutes in the delay\\nday <- length of a work day in minutes\\n\\nx <- (now + delay) / day {integer division}\\ny <- (now + delay) % day {modulo remainder}\\n\\nreturn startoftoday + x {in days} + y {in minutes}\\n
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/07"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5260","https://Stackoverflow.com","https://Stackoverflow.com/users/106/"],"string":"[\n \"https://Stackoverflow.com/questions/5260\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/106/\"\n]"},"input":{"kind":"string","value":"I have a situation where I want to add hours to a date and have the new date wrap around the work-day. I cobbled up a function to determine this new date, but want to make sure that I'm not forgetting anything.\n\nThe hours to be added is called \"delay\". It could easily be a parameter to the function instead.\n\nPlease post any suggestions. [VB.NET Warning]\n\n```\nPrivate Function GetDateRequired() As Date\n ''// A decimal representation of the current hour\n Dim hours As Decimal = Decimal.Parse(Date.Now.Hour) + (Decimal.Parse(Date.Now.Minute) / 60.0) \n\n Dim delay As Decimal = 3.0 ''// delay in hours\n Dim endOfDay As Decimal = 12.0 + 5.0 ''// end of day, in hours\n Dim startOfDay As Decimal = 8.0 ''// start of day, in hours\n\n Dim newHour As Integer\n Dim newMinute As Integer\n\n Dim dateRequired As Date = Now\n Dim delta As Decimal = hours + delay\n\n ''// Wrap around to the next day, if necessary\n If delta > endOfDay Then\n delta = delta - endOfDay\n dateRequired = dateRequired.AddDays(1)\n\n newHour = Integer.Parse(Decimal.Truncate(delta))\n newMinute = Integer.Parse(Decimal.Truncate((delta - newHour) * 60))\n newHour = startOfDay + newHour\n Else\n newHour = Integer.Parse(Decimal.Truncate(delta))\n newMinute = Integer.Parse(Decimal.Truncate((delta - newHour) * 60))\n End If\n\n dateRequired = New Date(dateRequired.Year, dateRequired.Month, dateRequired.Day, newHour, newMinute, 0)\n\n Return dateRequired\nEnd Sub\n\n```\n\n**Note**: This will probably not work if delay is more than 9 hours long. 
It should never change from 3, through.\n\nEDIT:\nThe goal is find the date and time that you get as a result of adding several hours to the current time. This is used to determine a default value for a due date of a submission. I want to add 3 hours to the current time to get the due date time. However, I don't want due dates that go beyond 5pm on the current day. So, I tried to have the hours split between (today, up to 5pm) and (tomorrow, from 8am on), such that adding 3 hours to 4pm would give you 19am, because 1 hour is added to the end of today and 2 hours are added to the beginning of tomorrow."},"output":{"kind":"string","value":"You should probably write some automated tests for each condition you can think of, and then just start brainstorming more, writing the tests as you think of them. This way, you can see for sure it will work, and will continue to work if you make further changes. Look up Test Driven Development if you like the results."}}},{"rowIdx":458,"cells":{"qid":{"kind":"number","value":5263,"string":"5,263"},"question":{"kind":"string","value":"
I have a self-referential Role table that represents a tree structure
\n\n
ID [INT] AUTO INCREMENT\nName [VARCHAR]\nParentID [INT]\n
\n\n
I am using an ADO.NET DataTable and DataAdapter to load and save values to this table. This works if I only create children of existing rows. If I make a child row, then make a child of that child, then Update, the temporary ID value generated by the DataTable is going into the ParentID column. I have the following data relation set:
\n"},{"answer_id":24472,"author":"James A. Rosen","author_id":1190,"author_profile":"https://Stackoverflow.com/users/1190","pm_score":3,"selected":true,"text":"
I don't know ADO.net in particular, but most ORMs won't automatically insert the ID of a new record in a relationship. You'll have to resort to the 2-step process:
\n\n
\n
build and save parent
\n
build and save child with relationship to parent
\n
\n\n
The reason that this is difficult for ORMs is because you might have circular dependencies, and it wouldn't know which object it needed to create an ID for first. Some ORMs are smart enough to figure out those relationships where there are no such circular dependencies, but most aren't.
I don't know ADO.net in particular, but most ORMs won't automatically insert the ID of a new record in a relationship. You'll have to resort to the 2-step process:
\\n\\n
\\n
build and save parent
\\n
build and save child with relationship to parent
\\n
\\n\\n
The reason that this is difficult for ORMs is because you might have circular dependencies, and it wouldn't know which object it needed to create an ID for first. Some ORMs are smart enough to figure out those relationships where there are no such circular dependencies, but most aren't.
I suggest you add a ForeignKeyConstraint, with UpdateRule set to Cascade.
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/07"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5263","https://Stackoverflow.com","https://Stackoverflow.com/users/173/"],"string":"[\n \"https://Stackoverflow.com/questions/5263\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/173/\"\n]"},"input":{"kind":"string","value":"I have a self-referential Role table that represents a tree structure \n\n```\nID [INT] AUTO INCREMENT\nName [VARCHAR]\nParentID [INT]\n\n```\n\nI am using an ADO.NET DataTable and DataAdapter to load and save values to this table. This works if I only create children of existing rows. If I make a child row, then make a child of that child, then Update, the temporary ID value generated by the DataTable is going into the ParentID column. I have the following data relation set:\n\n```\ndataset.Relations.Add(New DataRelation(\"RoleToRole\",RoleTable.Columns(\"ID\"), RoleTable.Columns(\"ParentID\")))\n\n```\n\nAnd when I make new child rows in the DataTable I call the SetParentRow method\n\n```\nnewRow.SetParentRow(parentRow)\n\n```\n\nIs there something special I have to do to get the ID generation to propagate recursively when I call Update on the DataAdapter?"},"output":{"kind":"string","value":"I don't know ADO.net in particular, but most ORMs won't automatically insert the ID of a new record in a relationship. You'll have to resort to the 2-step process:\n\n1. build and save parent\n2. build and save child with relationship to parent\n\nThe reason that this is difficult for ORMs is because you might have circular dependencies, and it wouldn't know which object it needed to create an ID for first. Some ORMs are smart enough to figure out those relationships where there are no such circular dependencies, but most aren't."}}},{"rowIdx":459,"cells":{"qid":{"kind":"number","value":5269,"string":"5,269"},"question":{"kind":"string","value":"
In C#, (and feel free to answer for other languages), what order does the runtime evaluate a logic statement?
\n\n
Example:
\n\n
DataTable myDt = new DataTable();\nif (myDt != null && myDt.Rows.Count > 0)\n{\n //do some stuff with myDt\n}\n
\n\n
Which statement does the runtime evaluate first -
\n\n
myDt != null\n
\n\n
or:
\n\n
myDt.Rows.Count > 0\n
\n\n
?
\n\n
Is there a time when the compiler would ever evaluate the statement backwards? Perhaps when an \"OR\" operator is involved?
\n\n
\n\n
\n
& is known as a logical bitwise operator and will always evaluate all the sub-expressions
\n
\n\n
What is a good example of when to use the bitwise operator instead of the \"short-circuited boolean\"?
AndAlso operator makes sure that only if the left side was TRUE, the right side will be evaluated (very important, since ifx is nothing x.go will crash)
\n
\n\n
You may use And instead ofAndAlso in vb. in which case the left side gets evaluated first as well, but the right side will get evaluated regardless of result.
\n\n
Best Practice: Always use AndAlso, unless you have a very good reason why not to.
\n\n
\n\n
It was asked in a followup why or when would anyone use And instead of AndAlso (or & instead of &&):\nHere is an example:
\n\n
if ( x.init() And y.init()) then\n x.process(y)\nend \ny.doDance()\n
\n\n
In this case, I want to init both X and Y. Y must be initialized in order for y.DoDance to be able to execute. However, in the init() function I am doing also some extra thing like checking a socket is open, and only if that works out ok, for both, I should go ahead and do the x.process(y).
\n\n
Again, this is probably not needed and not elegant in 99% of the cases, that is why I said that the default should be to use AndAlso.
ZombieSheep is dead-on. The only \"gotcha\" that might be waiting is that this is only true if you are using the && operator. When using the & operator, both expressions will be evaluated every time, regardless if one or both evaluate to false.
\n\n
if (amHungry & whiteCastleIsNearby)\n{\n // The code will check if White Castle is nearby\n // even when I am not hungry\n}\n\nif (amHungry && whiteCastleIsNearby)\n{\n // The code will only check if White Castle is nearby\n // when I am hungry\n}\n
\n"},{"answer_id":5295,"author":"Lasse V. Karlsen","author_id":267,"author_profile":"https://Stackoverflow.com/users/267","pm_score":2,"selected":false,"text":"
Note that there is a difference between && and & regarding how much of your expression is evaluated.
\n\n
&& is known as a short-circuited boolean AND, and will, as noted by others here, stop early if the result can be determined before all the sub-expressions are evaluated.
\n\n
& is known as a logical bitwise operator and will always evaluate all the sub-expressions.
\n\n
As such:
\n\n
if (a() && b())\n
\n\n
Will only call b if a returns true.
\n\n
however, this:
\n\n
if (a() & b())\n
\n\n
Will always call both a and b, even though the result of calling a is false and thus known to be false regardless of the result of calling b.
\n\n
This same difference exists for the || and | operators.
Some languages have interesting situations where expressions are executed in a different order. I am specifically thinking of Ruby, but I'm sure they borrowed it from elsewhere (probably Perl).
\n\n
The expressions in the logic will stay left to right, but for example:
\n\n
puts message unless message.nil?\n
\n\n
The above will evaluate \"message.nil?\" first, then if it evaluates to false (unless is like if except it executes when the condition is false instead of true), \"puts message\" will execute, which prints the contents of the message variable to the screen.
\n\n
It's kind of an interesting way to structure your code sometimes... I personally like to use it for very short 1 liners like the above.
\n\n
Edit:
\n\n
To make it a little clearer, the above is the same as:
The concept modesty is referring to is operator overloading. in the statement:\n ...\n A is evaluated first, if it evaluates to false, B is never evaluated. The same applies to
\n
\n\n
That's not operator overloading. Operator overloading is the term given for letting you define custom behaviour for operators, such as *, +, = and so on.
\n\n
This would let you write your own 'Log' class, and then do
\n\n
a = new Log(); // Log class overloads the + operator\na + \"some string\"; // Call the overloaded method - otherwise this wouldn't work because you can't normally add strings to objects.\n
\n"},{"answer_id":5382,"author":"Lasse V. Karlsen","author_id":267,"author_profile":"https://Stackoverflow.com/users/267","pm_score":0,"selected":false,"text":"
You use & when you specifically want to evaluate all the sub-expressions, most likely because they have side-effects you want, even though the final result will be false and thus not execute your then part of your if-statement.
\n\n
Note that & and | operates for both bitwise masks and boolean values and is not just for bitwise operations. They're called bitwise, but they are defined for both integers and boolean data types in C#.
When things are all in-line, they're executed left-to-right.
\n\n
When things are nested, they're executed inner-to-outer. This may seem confusing as usually what's \"innermost\" is on the right-hand side of the line, so it seems like it's going backwards...
\n\n
For example
\n\n
a = Foo( 5, GetSummary( \"Orion\", GetAddress(\"Orion\") ) );\n
\n\n
Things happen like this:
\n\n
\n
Call GetAddress with the literal \"Orion\"
\n
Call GetSummary with the literal \"Orion\" and the result of GetAddress
\n
Call Foo with the literal 5 and the result of GetSummary
I realise this question has already been answered, but I'd like to throw in another bit of information which is related to the topic.
\n\n
In languages, like C++, where you can actually overload the behaviour of the && and || operators, it is highly recommended that you do not do this. This is because when you overload this behaviour, you end up forcing the evaluation of both sides of the operation. This does two things:
\n\n
\n
It breaks the lazy evaluation mechanism because the overload is a function which has to be invoked, and hence both parameters are evaluated before calling the function.
\n
The order of evaluation of said parameters isn't guaranteed and can be compiler specific. Hence the objects wouldn't behave in the same manner as they do in the examples listed in the question/previous answers.
\n
\n\n
For more info, have a read of Scott Meyers' book, More Effective C++. Cheers!
\n"},{"answer_id":5432,"author":"C. K. Young","author_id":13,"author_profile":"https://Stackoverflow.com/users/13","pm_score":1,"selected":false,"text":"
I like Orion's responses. I'll add two things:
\n\n
\n
The left-to-right still applies first
\n
The inner-to-outer to ensure that all arguments are resolved before calling the function
\n
\n\n
Say we have the following example:
\n\n
a = Foo(5, GetSummary(\"Orion\", GetAddress(\"Orion\")),\n GetSummary(\"Chris\", GetAddress(\"Chris\")));\n
\n\n
Here's the order of execution:
\n\n
\n
GetAddress(\"Orion\")
\n
GetSummary(\"Orion\", ...)
\n
GetAddress(\"Chris\")
\n
GetSummary(\"Chris\", ...)
\n
Foo(...)
\n
Assigns to a
\n
\n\n
I can't speak about C#'s legal requirements (although I did test a similar example using Mono before writing this post), but this order is guaranteed in Java.
\n\n
And just for completeness (since this is a language-agnostic thread as well), there are languages like C and C++, where the order is not guaranteed unless there is a sequence point. References: 1, 2. In answering the thread's question, however, && and || are sequence points in C++ (unless overloaded; also see OJ's excellent answer). So some examples:
\n\n
\n
foo() && bar()
\n
foo() & bar()
\n
\n\n
In the && case, foo() is guaranteed to run before bar() (if the latter is run at all), since && is a sequence point. In the & case, no such guarantee is made (in C and C++), and indeed bar() can run before foo(), or vice versa.
It was asked in a followup why or when would anyone use And instead of AndAlso (or & instead of &&): Here is an example:
\n
if ( x.init() And y.init()) then\n x.process(y)\nend \ny.doDance()\n
\n
In this case, I want to init both X and Y. Y must be initialized in order for y.DoDance to be able to execute. However, in the init() function I am doing also some extra thing like checking a socket is open, and only if that works out ok, for both, I should go ahead and do the x.process(y).
\n
\n
I believe this is rather confusing. Although your example works, it's not the typical case for using And (and I would probably write this differently to make it clearer). And (& in most other languages) is actually the bitwise-and operation. You would use it to calculate bit operations, for example deleting a flag bit or masking and testing flags:
\n
Dim x As Formatting = Formatting.Bold Or Formatting.Italic\nIf (x And Formatting.Italic) = Formatting.Italic Then\n MsgBox("The text will be set in italic.")\nEnd If\n
AndAlso operator makes sure that only if the left side was TRUE, the right side will be evaluated (very important, since ifx is nothing x.go will crash)
\\n
\\n\\n
You may use And instead ofAndAlso in vb. in which case the left side gets evaluated first as well, but the right side will get evaluated regardless of result.
\\n\\n
Best Practice: Always use AndAlso, unless you have a very good reason why not to.
\\n\\n
\\n\\n
It was asked in a followup why or when would anyone use And instead of AndAlso (or & instead of &&):\\nHere is an example:
\\n\\n
if ( x.init() And y.init()) then\\n x.process(y)\\nend \\ny.doDance()\\n
\\n\\n
In this case, I want to init both X and Y. Y must be initialized in order for y.DoDance to be able to execute. However, in the init() function I am doing also some extra thing like checking a socket is open, and only if that works out ok, for both, I should go ahead and do the x.process(y).
\\n\\n
Again, this is probably not needed and not elegant in 99% of the cases, that is why I said that the default should be to use AndAlso.
ZombieSheep is dead-on. The only \\\"gotcha\\\" that might be waiting is that this is only true if you are using the && operator. When using the & operator, both expressions will be evaluated every time, regardless if one or both evaluate to false.
\\n\\n
if (amHungry & whiteCastleIsNearby)\\n{\\n // The code will check if White Castle is nearby\\n // even when I am not hungry\\n}\\n\\nif (amHungry && whiteCastleIsNearby)\\n{\\n // The code will only check if White Castle is nearby\\n // when I am hungry\\n}\\n
Note that there is a difference between && and & regarding how much of your expression is evaluated.
\\n\\n
&& is known as a short-circuited boolean AND, and will, as noted by others here, stop early if the result can be determined before all the sub-expressions are evaluated.
\\n\\n
& is known as a logical bitwise operator and will always evaluate all the sub-expressions.
\\n\\n
As such:
\\n\\n
if (a() && b())\\n
\\n\\n
Will only call b if a returns true.
\\n\\n
however, this:
\\n\\n
if (a() & b())\\n
\\n\\n
Will always call both a and b, even though the result of calling a is false and thus known to be false regardless of the result of calling b.
\\n\\n
This same difference exists for the || and | operators.
Some languages have interesting situations where expressions are executed in a different order. I am specifically thinking of Ruby, but I'm sure they borrowed it from elsewhere (probably Perl).
\\n\\n
The expressions in the logic will stay left to right, but for example:
\\n\\n
puts message unless message.nil?\\n
\\n\\n
The above will evaluate \\\"message.nil?\\\" first, then if it evaluates to false (unless is like if except it executes when the condition is false instead of true), \\\"puts message\\\" will execute, which prints the contents of the message variable to the screen.
\\n\\n
It's kind of an interesting way to structure your code sometimes... I personally like to use it for very short 1 liners like the above.
\\n\\n
Edit:
\\n\\n
To make it a little clearer, the above is the same as:
The concept modesty is referring to is operator overloading. in the statement:\\n ...\\n A is evaluated first, if it evaluates to false, B is never evaluated. The same applies to
\\n
\\n\\n
That's not operator overloading. Operator overloading is the term given for letting you define custom behaviour for operators, such as *, +, = and so on.
\\n\\n
This would let you write your own 'Log' class, and then do
\\n\\n
a = new Log(); // Log class overloads the + operator\\na + \\\"some string\\\"; // Call the overloaded method - otherwise this wouldn't work because you can't normally add strings to objects.\\n
You use & when you specifically want to evaluate all the sub-expressions, most likely because they have side-effects you want, even though the final result will be false and thus not execute your then part of your if-statement.
\\n\\n
Note that & and | operates for both bitwise masks and boolean values and is not just for bitwise operations. They're called bitwise, but they are defined for both integers and boolean data types in C#.
When things are all in-line, they're executed left-to-right.
\\n\\n
When things are nested, they're executed inner-to-outer. This may seem confusing as usually what's \\\"innermost\\\" is on the right-hand side of the line, so it seems like it's going backwards...
\\n\\n
For example
\\n\\n
a = Foo( 5, GetSummary( \\\"Orion\\\", GetAddress(\\\"Orion\\\") ) );\\n
\\n\\n
Things happen like this:
\\n\\n
\\n
Call GetAddress with the literal \\\"Orion\\\"
\\n
Call GetSummary with the literal \\\"Orion\\\" and the result of GetAddress
\\n
Call Foo with the literal 5 and the result of GetSummary
I realise this question has already been answered, but I'd like to throw in another bit of information which is related to the topic.
\\n\\n
In languages, like C++, where you can actually overload the behaviour of the && and || operators, it is highly recommended that you do not do this. This is because when you overload this behaviour, you end up forcing the evaluation of both sides of the operation. This does two things:
\\n\\n
\\n
It breaks the lazy evaluation mechanism because the overload is a function which has to be invoked, and hence both parameters are evaluated before calling the function.
\\n
The order of evaluation of said parameters isn't guaranteed and can be compiler specific. Hence the objects wouldn't behave in the same manner as they do in the examples listed in the question/previous answers.
\\n
\\n\\n
For more info, have a read of Scott Meyers' book, More Effective C++. Cheers!
The inner-to-outer to ensure that all arguments are resolved before calling the function
\\n
\\n\\n
Say we have the following example:
\\n\\n
a = Foo(5, GetSummary(\\\"Orion\\\", GetAddress(\\\"Orion\\\")),\\n GetSummary(\\\"Chris\\\", GetAddress(\\\"Chris\\\")));\\n
\\n\\n
Here's the order of execution:
\\n\\n
\\n
GetAddress(\\\"Orion\\\")
\\n
GetSummary(\\\"Orion\\\", ...)
\\n
GetAddress(\\\"Chris\\\")
\\n
GetSummary(\\\"Chris\\\", ...)
\\n
Foo(...)
\\n
Assigns to a
\\n
\\n\\n
I can't speak about C#'s legal requirements (although I did test a similar example using Mono before writing this post), but this order is guaranteed in Java.
\\n\\n
And just for completeness (since this is a language-agnostic thread as well), there are languages like C and C++, where the order is not guaranteed unless there is a sequence point. References: 1, 2. In answering the thread's question, however, && and || are sequence points in C++ (unless overloaded; also see OJ's excellent answer). So some examples:
\\n\\n
\\n
foo() && bar()
\\n
foo() & bar()
\\n
\\n\\n
In the && case, foo() is guaranteed to run before bar() (if the latter is run at all), since && is a sequence point. In the & case, no such guarantee is made (in C and C++), and indeed bar() can run before foo(), or vice versa.
It was asked in a followup why or when would anyone use And instead of AndAlso (or & instead of &&): Here is an example:
\\n
if ( x.init() And y.init()) then\\n x.process(y)\\nend \\ny.doDance()\\n
\\n
In this case, I want to init both X and Y. Y must be initialized in order for y.DoDance to be able to execute. However, in the init() function I am doing also some extra thing like checking a socket is open, and only if that works out ok, for both, I should go ahead and do the x.process(y).
\\n
\\n
I believe this is rather confusing. Although your example works, it's not the typical case for using And (and I would probably write this differently to make it clearer). And (& in most other languages) is actually the bitwise-and operation. You would use it to calculate bit operations, for example deleting a flag bit or masking and testing flags:
\\n
Dim x As Formatting = Formatting.Bold Or Formatting.Italic\\nIf (x And Formatting.Italic) = Formatting.Italic Then\\n MsgBox("The text will be set in italic.")\\nEnd If\\n
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/07"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5269","https://Stackoverflow.com","https://Stackoverflow.com/users/357/"],"string":"[\n \"https://Stackoverflow.com/questions/5269\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/357/\"\n]"},"input":{"kind":"string","value":"In C#, (and feel free to answer for other languages), what order does the runtime evaluate a logic statement?\n\nExample:\n\n```\nDataTable myDt = new DataTable();\nif (myDt != null && myDt.Rows.Count > 0)\n{\n //do some stuff with myDt\n}\n\n```\n\nWhich statement does the runtime evaluate first - \n\n```\nmyDt != null\n\n```\n\nor:\n\n```\nmyDt.Rows.Count > 0\n\n```\n\n?\n\nIs there a time when the compiler would ever evaluate the statement backwards? Perhaps when an \"OR\" operator is involved?\n\n---\n\n> \n> & is known as a logical bitwise operator and will always evaluate all the sub-expressions\n> \n> \n> \n\nWhat is a good example of when to use the bitwise operator instead of the \"short-circuited boolean\"?"},"output":{"kind":"string","value":"C# : Left to right, and processing stops if a non-match (evaluates to false) is found."}}},{"rowIdx":460,"cells":{"qid":{"kind":"number","value":5323,"string":"5,323"},"question":{"kind":"string","value":"
It seems like every time I try to create a pure CSS layout it takes me much longer than if I'd use a table or two. Getting three columns to be equal lengths with different amounts of data seems to require particular fancy hacks, especially when dealing with cross-browser issues.
\n\n
My Question:
\n\n
Who are these few tables going to hurt?
\n\n
Tables seem to work particularly well on tabular data — why are they so reviled in this day and age?
\n\n
Google.com has a table in its source code, so do many other sites (stackoverflow.com does not by the way).
I'm of the thought that CSS layout with as few tables as possible is cleaner and better, but I agree that sometimes you just gotta use a table.
\n\n
Business-wise, it's generally \"what's going to get it done the fastest and most reliable way.\" In my experience, using a few tables generally falls into that category.
\n\n
I have found that a very effective way to mitigate cross-browser differences in CSS rendering is to use the \"strict\" doctype at the top of your page:
\n\n
<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01//EN\" \"http://www.w3.org/TR/html4/strict.dtd\">\n
\n\n
Also, for the dreaded IE6 CSS issues, you can use this hack:
\n\n
.someClass {\n background-color:black; /*this is for most browsers*/\n _background-color:white; /*this is for IE6 only - all others will ignore it*/\n}\n
Like a lot of things, it's a good idea that often gets carried too far. I like a div+css driven layout because it's usually quite easy to change the appearance, even drastically, just through the stylesheet. It's also nice to be friendly to lower-level browsers, screen readers, etc. But like most decisions in programming, the purpose of the site and the cost of development should be considered in making a decision. Neither side is the right way to go 100% of the time.
\n\n
BTW, I think everyone agrees that tables should be used for tabular data.
Business reason for CSS layout: You can blow away the customers by saying \"our portal is totally customizable/skinnable without writing code!\"
\n\n
Then again, I don't see any evil in designing block elements with tables. By block elements I mean where it doesn't make any sense to break apart the said element in different designs.
\n\n
So, tabular data would best be presented with tables, of course. Designing major building blocks (such as a menu bar, news ticker, etc.) within their own tables should be OK as well. Just don't rely on tables for the overall page layout and you'll be fine, methinks.
The idea is that Designers can Design and Web Developers can implement. This is especially the case in dynamic web applications where you do not want your Designers to mess around in your Source Code.
\n\n
Now, while there are templating engines, Designers apparantly just love to go crazy and CSS allows to pull a lot more stunts than tables.
\n\n
That being said: As a developer, i abandoned CSS Layout mostly because my Design sucks anyway, so at least it can suck properly :-) But if I would ever hire a Designer, I would let him use whatever his WYSIWYG Editor spits out.
Keep your layout and your content separate allows you to redesign or make tweaks and changes to your site easily. It may take a bit longer up front, but the longest phase of software development is maintenance. A css friendly site with clear separation between content and design is best over the course of maintenance.
Using semantic HTML design is one of those things where you don't know what you're missing unless you make a practice of it. I've worked on several sites where the site was restyled after the fact with little or no impact to the server-side code.
\n\n
Restyling sites is a very common request, something that I've noticed more now that I'm able to say \"yes\" to instead of try to talk my way out of.
\n\n
And, once you've learned to work with the page layout system, it's usually no harder than table based layout.
I agree with the maintainability factor. It does take me a bit longer to get my initial layouts done (since I'm still a jedi apprentice in the CSS arts) but doing a complete revamp of a 15 page web site just by updating 1 file is heaven.
Some additional reasons why this is good practice:
\n\n
\n
Accessibility - the web should ideally be\naccessible by all
\n
Performance - save\n bandwidth and load faster on mobile\n devices (these lack bandwidth to some\n degree and cannot layout complex\n tables quickly). Besides loading fast is always a good thing...
In my experience, the only time this really adds business value is when there is a need for 100% support for accessibility. When you have users who are visually impaired and/or use screenreaders to view your site, you need to make sure that your site is compliant to accessibility standards.
\n\n
Users that use screenreaders will tend to have their own high-contrast, large-font stylesheet (if your site doesn't supply one itself) which makes it easy for screenreaders to parse the page.
\n\n
When a screenreader reads a page and sees a table, it'll tell the user it's a table. Hence, if you use a table for layout, it gets very confusing because the user doesn't know that the content of the table is actually the article instead of some other tabular data. A menu should be a list or a collection of divs, not a table with menu items, again that's confusing. You should make sure that you use blockquotes, alt-tags title attributes, etc to make it more readable.
\n\n
If you make your design CSS-driven, then your entire look and feel can be stripped away and replaced with a raw view which is very readable to those users. If you have inline styles, table-based layouts, etc, then you're making it harder for those users to parse your content.
\n\n
While I do feel that maintenance is made easier for some things when your site is purely laid out with CSS, I don't think it's the case for all kinds of maintenance -- especially when you're dealing with cross-browser CSS, which can obviously be a nightmare.
\n\n
In short, your page should describe its make-up in a standards compliant way if you want it to be accessible to said users. If you have no need/requirement and likely won't need it in the future, then don't bother wasting too much time attempting to be a CSS purist :) Use the mixture of style and layout techniques that suits you and makes your job easier.
\n\n
Cheers!
\n\n
[EDIT - added strikethrough to wrong or misleading parts of this answer - see comments]
In the real world, your chances of taking one design and totally reskinning it without touching the markup are pretty remote. It's fine for blogs and concocted demos like the csszengarden, but it's a bogus benefit on any site with a moderately complex design, really. Using a CMS is far more important.
\n\n
DIVs plus CSS != semantic, either. Good HTML is well worthwhile for SEO and accessibility always, whether tables or CSS are used for layout. You get really efficient, fast web designs by combining really simple tables with some good CSS.
\n\n
Table layouts can be more accessible than CSS layouts, and the reverse is also true - it depends TOTALLY on the source order of the content, and just because you avoided tables does not mean users with screen readers will automatically have a good time on your site. Layout tables are irrelevant to screen reader access provided the content makes sense when linearised, exactly the same as if you do CSS layout. Data tables are different; they are really hard to mark up properly and even then the users of screen reader software generally don't know the commands they need to use to understand the data.
\n\n
Rather than agonising over using a few layout tables, you should worry that heading tags and alt text are used properly, and that form labels are properly assigned. Then you'll have a pretty good stab at real world accessibility.
\n\n
This from several years experience running user testing for web accessibility, specialising in accessible site design, and from consulting for Cahoot, an online bank, on this topic for a year.
\n\n
So my answer to the poster is no, there is no business reason to prefer CSS over tables. It's more elegant, more satisfying and more correct, but you as the person building it and the person that has to maintain it after you are the only two people in the world who give a rat's ass whether it's CSS or tables.
When a screenreader reads a page and sees a table, it'll tell the user it's a table. Hence, if you use a table for layout, it gets very confusing because the user doesn't know that the content of the table is actually the article instead of some other tabular data
\n
\n
\n\n
This is actually not true; screen readers like JAWS, Window Eyes and HAL ignore layout tables. They work really well at dealing with the real web.
Which will render the document according to that style when you send it to the printer. This allows you to strip out the background images, additional header/footer information and just print the raw information without creating a separate module.
doing a complete revamp of a 15 page web site just by updating 1 file is heaven.
\n
\n
\n\n
This is true. Unfortunately, having one CSS file used by 15,000 complex and widely differing pages is your worst nightmare come true. Change something - did it break a thousand pages? Who knows?
\n\n
CSS is a double-edged sword on big sites like ours.
If you have a public facing website, the real business case is SEO.
\n\n
Accessibility is important and maintaining semantic (X)HTML is much easier than maintaining table layouts, but that #1 spot on Google will bring home the bacon.
Monthly web report: 127 million page views for July
\n \n
...
\n \n
Latimes.com keeps getting better at SEO (search engine optimization), which means our stories are ranking higher in Google and other search engines. We are also performing better on sites like Digg.com. All that adds up to more exposure and more readership than ever before.
\n
\n\n
If you look at their site, they've got a pretty decent CSS layout going.
\n\n
Generally, you find relatively few table layouts performing well in the SERPs these days.
I don't think there is a business reason at all. Technical reason, maybe, even so, barely - it is a huge timesuck the world over, and then you look at it in IE and break down and weep.
*I would let him use whatever his WYSIWYG Editor spits out \n I just threw-up a little... \n *ahh hello? You don't think the graphic designer is writing the CSS by hand do you?
\n
\n\n
Funnily enough I have worked with a few designers and the best among them do hand-tweak their css. The guy I am thinking of actually does all of his design work as an XHTML file with a couple of CSS files and creates graphical elements on the fly as he needs them. He uses Dreamweaver but only really as a navigation tool. (I learned a lot from that guy)
\n\n
Once you've made an investment to learn purely CSS-based design and have had a little experience (found out where IE sucks [to be fair it's getting better]) it ends up being faster I've found. I worked on Content Management Systems and the application rarely had to change for the designers to come up with a radically different look.
Since this is stackoverflow, I'll give you my programmer's answer
\nsemantics 101\n
First take a look at this code and think about what's wrong here...
\n
class car {\n int wheels = 4;\n string engine;\n}\n\ncar mybike = new car();\nmybike.wheels = 2;\nmybike.engine = null;\n
\n
The problem, of course, is that a bike is not a car. The car class is an inappropriate class for the bike instance. The code is error-free, but is semantically incorrect. It reflects poorly on the programmer.
\nsemantics 102\n
Now apply this to document markup. If your document needs to present tabular data, then the appropriate tag would be <table>. If you place navigation into a table however, then you're misusing the intended purpose of the <table> element. In the second case, you're not presenting tabular data -- you're (mis)using the <table> element to achieve a presentational goal.
\nconclusion\n
Whom does this hurt? No one. Who benefits if you use semantic markup? You -- and your professional reputation. Now go and do the right thing.
I use to design all table based web sites and I was resistant at first, but little by little I moved to CSS. It did not happen overnight, but it happened and it is something you should do as well.
\n\n
There have been some nights I wanted to toss my computer out the window because the style I was applying to a div was not doing what I want, but you learn from those obstacles.
\n\n
As for a business, once you get to designing web sites by CSS down to a science, you can develop processes for each site and even use past web sites and just add a different header graphic, color, etc.
\n\n
Also, be sure to embed/include all reusable parts of your website: header, sub-header, footer.
\n\n
Once you get over the hump, it will be all down hill from there. Good luck!
The main reason why we changed our web pages to DIV/CSS based layout was the delay in rendering table based pages.
\n\n
We have a public web site, with most of its users base is in countries like India, where the internet bandwidth is still an issue (its getting improved day by day, but still not on par). In such circumstances, when we used table based layout, users had to stare at a blank page for considerably long time. Then the entire page will get displayed as a whole in a tick. By converting our pages to DIV, we managed to bring some contents to the browser almost instantly as users entered to our web site, and those contents where enough to get the users engaged till browser downloads entire contents of the page.
\n\n
The major flaw with table based implementation is that, the browser we will show the content of the table only after it downloads the entire html for that table. The issue will blow out when we have a main table which wraps the entire content of the page, and when we have lots of nested tables. For the 'flexible tables' (those without any fixed width), after downloading entire table tag, browser has to parse till the last row of the table to find out the width of each columns, then has to parse it again for displaying the content. Till all these happens users has to stare at a blank screen, then everything will come to screen in a tick.
There definitely is. If you are still striving for it, you are not getting it right.
\n\n
DIV+CSS layout is actually much easier than table layout in terms of maintainability and productivity. Just keep practicing it before it's too early to say that.
\n\n
Table layout is good too it's just not meant for layouts and have exceptional drawbacks when it comes to minor tuning.
I'm of the thought that CSS layout with as few tables as possible is cleaner and better, but I agree that sometimes you just gotta use a table.
\\n\\n
Business-wise, it's generally \\\"what's going to get it done the fastest and most reliable way.\\\" In my experience, using a few tables generally falls into that category.
\\n\\n
I have found that a very effective way to mitigate cross-browser differences in CSS rendering is to use the \\\"strict\\\" doctype at the top of your page:
\\n\\n
<!DOCTYPE HTML PUBLIC \\\"-//W3C//DTD HTML 4.01//EN\\\" \\\"http://www.w3.org/TR/html4/strict.dtd\\\">\\n
\\n\\n
Also, for the dreaded IE6 CSS issues, you can use this hack:
\\n\\n
.someClass {\\n background-color:black; /*this is for most browsers*/\\n _background-color:white; /*this is for IE6 only - all others will ignore it*/\\n}\\n
Like a lot of things, it's a good idea that often gets carried too far. I like a div+css driven layout because it's usually quite easy to change the appearance, even drastically, just through the stylesheet. It's also nice to be friendly to lower-level browsers, screen readers, etc. But like most decisions in programming, the purpose of the site and the cost of development should be considered in making a decision. Neither side is the right way to go 100% of the time.
\\n\\n
BTW, I think everyone agrees that tables should be used for tabular data.
Business reason for CSS layout: You can blow away the customers by saying \\\"our portal is totally customizable/skinnable without writing code!\\\"
\\n\\n
Then again, I don't see any evil in designing block elements with tables. By block elements I mean where it doesn't make any sense to break apart the said element in different designs.
\\n\\n
So, tabular data would best be presented with tables, of course. Designing major building blocks (such as a menu bar, news ticker, etc.) within their own tables should be OK as well. Just don't rely on tables for the overall page layout and you'll be fine, methinks.
The idea is that Designers can Design and Web Developers can implement. This is especially the case in dynamic web applications where you do not want your Designers to mess around in your Source Code.
\\n\\n
Now, while there are templating engines, Designers apparantly just love to go crazy and CSS allows to pull a lot more stunts than tables.
\\n\\n
That being said: As a developer, i abandoned CSS Layout mostly because my Design sucks anyway, so at least it can suck properly :-) But if I would ever hire a Designer, I would let him use whatever his WYSIWYG Editor spits out.
Keep your layout and your content separate allows you to redesign or make tweaks and changes to your site easily. It may take a bit longer up front, but the longest phase of software development is maintenance. A css friendly site with clear separation between content and design is best over the course of maintenance.
Using semantic HTML design is one of those things where you don't know what you're missing unless you make a practice of it. I've worked on several sites where the site was restyled after the fact with little or no impact to the server-side code.
\\n\\n
Restyling sites is a very common request, something that I've noticed more now that I'm able to say \\\"yes\\\" to instead of try to talk my way out of.
\\n\\n
And, once you've learned to work with the page layout system, it's usually no harder than table based layout.
I agree with the maintainability factor. It does take me a bit longer to get my initial layouts done (since I'm still a jedi apprentice in the CSS arts) but doing a complete revamp of a 15 page web site just by updating 1 file is heaven.
Some additional reasons why this is good practice:
\\n\\n
\\n
Accessibility - the web should ideally be\\naccessible by all
\\n
Performance - save\\n bandwidth and load faster on mobile\\n devices (these lack bandwidth to some\\n degree and cannot layout complex\\n tables quickly). Besides loading fast is always a good thing...
In my experience, the only time this really adds business value is when there is a need for 100% support for accessibility. When you have users who are visually impaired and/or use screenreaders to view your site, you need to make sure that your site is compliant to accessibility standards.
\\n\\n
Users that use screenreaders will tend to have their own high-contrast, large-font stylesheet (if your site doesn't supply one itself) which makes it easy for screenreaders to parse the page.
\\n\\n
When a screenreader reads a page and sees a table, it'll tell the user it's a table. Hence, if you use a table for layout, it gets very confusing because the user doesn't know that the content of the table is actually the article instead of some other tabular data. A menu should be a list or a collection of divs, not a table with menu items, again that's confusing. You should make sure that you use blockquotes, alt-tags title attributes, etc to make it more readable.
\\n\\n
If you make your design CSS-driven, then your entire look and feel can be stripped away and replaced with a raw view which is very readable to those users. If you have inline styles, table-based layouts, etc, then you're making it harder for those users to parse your content.
\\n\\n
While I do feel that maintenance is made easier for some things when your site is purely laid out with CSS, I don't think it's the case for all kinds of maintenance -- especially when you're dealing with cross-browser CSS, which can obviously be a nightmare.
\\n\\n
In short, your page should describe its make-up in a standards compliant way if you want it to be accessible to said users. If you have no need/requirement and likely won't need it in the future, then don't bother wasting too much time attempting to be a CSS purist :) Use the mixture of style and layout techniques that suits you and makes your job easier.
\\n\\n
Cheers!
\\n\\n
[EDIT - added strikethrough to wrong or misleading parts of this answer - see comments]
In the real world, your chances of taking one design and totally reskinning it without touching the markup are pretty remote. It's fine for blogs and concocted demos like the csszengarden, but it's a bogus benefit on any site with a moderately complex design, really. Using a CMS is far more important.
\\n\\n
DIVs plus CSS != semantic, either. Good HTML is well worthwhile for SEO and accessibility always, whether tables or CSS are used for layout. You get really efficient, fast web designs by combining really simple tables with some good CSS.
\\n\\n
Table layouts can be more accessible than CSS layouts, and the reverse is also true - it depends TOTALLY on the source order of the content, and just because you avoided tables does not mean users with screen readers will automatically have a good time on your site. Layout tables are irrelevant to screen reader access provided the content makes sense when linearised, exactly the same as if you do CSS layout. Data tables are different; they are really hard to mark up properly and even then the users of screen reader software generally don't know the commands they need to use to understand the data.
\\n\\n
Rather than agonising over using a few layout tables, you should worry that heading tags and alt text are used properly, and that form labels are properly assigned. Then you'll have a pretty good stab at real world accessibility.
\\n\\n
This from several years experience running user testing for web accessibility, specialising in accessible site design, and from consulting for Cahoot, an online bank, on this topic for a year.
\\n\\n
So my answer to the poster is no, there is no business reason to prefer CSS over tables. It's more elegant, more satisfying and more correct, but you as the person building it and the person that has to maintain it after you are the only two people in the world who give a rat's ass whether it's CSS or tables.
When a screenreader reads a page and sees a table, it'll tell the user it's a table. Hence, if you use a table for layout, it gets very confusing because the user doesn't know that the content of the table is actually the article instead of some other tabular data
\\n
\\n
\\n\\n
This is actually not true; screen readers like JAWS, Window Eyes and HAL ignore layout tables. They work really well at dealing with the real web.
Which will render the document according to that style when you send it to the printer. This allows you to strip out the background images, additional header/footer information and just print the raw information without creating a separate module.
doing a complete revamp of a 15 page web site just by updating 1 file is heaven.
\\n
\\n
\\n\\n
This is true. Unfortunately, having one CSS file used by 15,000 complex and widely differing pages is your worst nightmare come true. Change something - did it break a thousand pages? Who knows?
\\n\\n
CSS is a double-edged sword on big sites like ours.
If you have a public facing website, the real business case is SEO.
\\n\\n
Accessibility is important and maintaining semantic (X)HTML is much easier than maintaining table layouts, but that #1 spot on Google will bring home the bacon.
Monthly web report: 127 million page views for July
\\n \\n
...
\\n \\n
Latimes.com keeps getting better at SEO (search engine optimization), which means our stories are ranking higher in Google and other search engines. We are also performing better on sites like Digg.com. All that adds up to more exposure and more readership than ever before.
\\n
\\n\\n
If you look at their site, they've got a pretty decent CSS layout going.
\\n\\n
Generally, you find relatively few table layouts performing well in the SERPs these days.
I don't think there is a business reason at all. Technical reason, maybe, even so, barely - it is a huge timesuck the world over, and then you look at it in IE and break down and weep.
*I would let him use whatever his WYSIWYG Editor spits out \\n I just threw-up a little... \\n *ahh hello? You don't think the graphic designer is writing the CSS by hand do you?
\\n
\\n\\n
Funnily enough I have worked with a few designers and the best among them do hand-tweak their css. The guy I am thinking of actually does all of his design work as an XHTML file with a couple of CSS files and creates graphical elements on the fly as he needs them. He uses Dreamweaver but only really as a navigation tool. (I learned a lot from that guy)
\\n\\n
Once you've made an investment to learn purely CSS-based design and have had a little experience (found out where IE sucks [to be fair it's getting better]) it ends up being faster I've found. I worked on Content Management Systems and the application rarely had to change for the designers to come up with a radically different look.
Since this is stackoverflow, I'll give you my programmer's answer
\\nsemantics 101\\n
First take a look at this code and think about what's wrong here...
\\n
class car {\\n int wheels = 4;\\n string engine;\\n}\\n\\ncar mybike = new car();\\nmybike.wheels = 2;\\nmybike.engine = null;\\n
\\n
The problem, of course, is that a bike is not a car. The car class is an inappropriate class for the bike instance. The code is error-free, but is semantically incorrect. It reflects poorly on the programmer.
\\nsemantics 102\\n
Now apply this to document markup. If your document needs to present tabular data, then the appropriate tag would be <table>. If you place navigation into a table however, then you're misusing the intended purpose of the <table> element. In the second case, you're not presenting tabular data -- you're (mis)using the <table> element to achieve a presentational goal.
\\nconclusion\\n
Whom does this hurt? No one. Who benefits if you use semantic markup? You -- and your professional reputation. Now go and do the right thing.
I use to design all table based web sites and I was resistant at first, but little by little I moved to CSS. It did not happen overnight, but it happened and it is something you should do as well.
\\n\\n
There have been some nights I wanted to toss my computer out the window because the style I was applying to a div was not doing what I want, but you learn from those obstacles.
\\n\\n
As for a business, once you get to designing web sites by CSS down to a science, you can develop processes for each site and even use past web sites and just add a different header graphic, color, etc.
\\n\\n
Also, be sure to embed/include all reusable parts of your website: header, sub-header, footer.
\\n\\n
Once you get over the hump, it will be all down hill from there. Good luck!
The main reason why we changed our web pages to DIV/CSS based layout was the delay in rendering table based pages.
\\n\\n
We have a public web site, with most of its users base is in countries like India, where the internet bandwidth is still an issue (its getting improved day by day, but still not on par). In such circumstances, when we used table based layout, users had to stare at a blank page for considerably long time. Then the entire page will get displayed as a whole in a tick. By converting our pages to DIV, we managed to bring some contents to the browser almost instantly as users entered to our web site, and those contents where enough to get the users engaged till browser downloads entire contents of the page.
\\n\\n
The major flaw with table based implementation is that, the browser we will show the content of the table only after it downloads the entire html for that table. The issue will blow out when we have a main table which wraps the entire content of the page, and when we have lots of nested tables. For the 'flexible tables' (those without any fixed width), after downloading entire table tag, browser has to parse till the last row of the table to find out the width of each columns, then has to parse it again for displaying the content. Till all these happens users has to stare at a blank screen, then everything will come to screen in a tick.
There definitely is. If you are still striving for it, you are not getting it right.
\\n\\n
DIV+CSS layout is actually much easier than table layout in terms of maintainability and productivity. Just keep practicing it before it's too early to say that.
\\n\\n
Table layout is good too it's just not meant for layouts and have exceptional drawbacks when it comes to minor tuning.
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/07"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5323","https://Stackoverflow.com","https://Stackoverflow.com/users/556/"],"string":"[\n \"https://Stackoverflow.com/questions/5323\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/556/\"\n]"},"input":{"kind":"string","value":"It seems like every time I try to create a pure CSS layout it takes me much longer than if I'd use a table or two. Getting three columns to be equal lengths with different amounts of data seems to require particular fancy hacks, especially when dealing with cross-browser issues.\n\nMy Question:\n\nWho are these few tables going to hurt? \n\nTables seem to work particularly well on tabular data — why are they so reviled in this day and age?\n\nGoogle.com has a table in its source code, so do many other sites (*stackoverflow.com does not by the way*)."},"output":{"kind":"string","value":"Since this is stack**overflow**, I'll give you my *programmer's answer*\n\nsemantics 101\nFirst take a look at this code and think about what's wrong here...\n\n```\nclass car {\n int wheels = 4;\n string engine;\n}\n\ncar mybike = new car();\nmybike.wheels = 2;\nmybike.engine = null;\n\n```\n\nThe problem, of course, is that a bike is not a car. The car class is an inappropriate class for the bike instance. The code is error-free, but is semantically incorrect. It reflects poorly on the programmer.\n\nsemantics 102\nNow apply this to document markup. If your document needs to present tabular data, then the appropriate tag would be `
`. If you place navigation into a table however, then you're misusing the intended purpose of the `
` element. In the second case, you're not presenting tabular data -- you're (mis)using the `
` element to achieve a presentational goal.\n\nconclusion\nWhom does this hurt? No one. Who benefits if you use semantic markup? You -- and your professional reputation. Now go and do the right thing."}}},{"rowIdx":461,"cells":{"qid":{"kind":"number","value":5328,"string":"5,328"},"question":{"kind":"string","value":"
So, in Java, the first line of your constructor HAS to be a call to super... be it implicitly calling super(), or explicitly calling another constructor. What I want to know is, why can't I put a try block around that?
\n\n
My specific case is that I have a mock class for a test. There is no default constructor, but I want one to make the tests simpler to read. I also want to wrap the exceptions thrown from the constructor into a RuntimeException.
\n\n
So, what I want to do is effectively this:
\n\n
public class MyClassMock extends MyClass {\n public MyClassMock() {\n try {\n super(0);\n } catch (Exception e) {\n throw new RuntimeException(e);\n }\n }\n\n // Mocked methods\n}\n
\n\n
But Java complains that super isn't the first statement.
\n\n
My workaround:
\n\n
public class MyClassMock extends MyClass {\n public static MyClassMock construct() {\n try {\n return new MyClassMock();\n } catch (Exception e) {\n throw new RuntimeException(e);\n }\n }\n\n public MyClassMock() throws Exception {\n super(0);\n }\n\n // Mocked methods\n}\n
\n\n
Is this the best workaround? Why doesn't Java let me do the former?
\n\n\n\n
My best guess as to the \"why\" is that Java doesn't want to let me have a constructed object in a potentially inconsistent state... however, in doing a mock, I don't care about that. It seems I should be able to do the above... or at least I know that the above is safe for my case... or seems as though it should be anyways.
\n\n
I am overriding any methods I use from the tested class, so there is no risk that I am using uninitialized variables.
\n"},"answers":{"kind":"list like","value":[{"answer_id":5351,"author":"Lasse V. Karlsen","author_id":267,"author_profile":"https://Stackoverflow.com/users/267","pm_score":5,"selected":true,"text":"
Unfortunately, compilers can't work on theoretical principles, and even though you may know that it is safe in your case, if they allowed it, it would have to be safe for all cases.
\n\n
In other words, the compiler isn't stopping just you, it's stopping everyone, including all those that don't know that it is unsafe and needs special handling. There are probably other reasons for this as well, as all languages usually have ways to do unsafe things if one knows how to deal with them.
\n\n
In C# .NET there are similar provisions, and the only way to declare a constructor that calls a base constructor is this:
\n\n
public ClassName(...) : base(...)\n
\n\n
in doing so, the base constructor will be called before the body of the constructor, and you cannot change this order.
I don't know how Java is implemented internally, but if the constructor of the superclass throws an exception, then there isn't a instance of the class you extend. It would be impossible to call the toString() or equals() methods, for example, since they are inherited in most cases.
\n\n
Java may allow a try/catch around the super() call in the constructor if 1. you override ALL methods from the superclasses, and 2. you don't use the super.XXX() clause, but that all sounds too complicated to me.
I can't presume to have a deep understanding of Java internals, but it is my understanding that, when a compiler needs to instantiate a derived class, it has to first create the base (and its base before that(...)) and then slap on the extensions made in the subclass.
\n\n
So it is not even the danger of uninited variables or anything like that at all. When you try to do something in the subclass' constructor before the base class' constructor, you are basically asking the compiler to extend a base object instance that doesn't exist yet.
\n\n
Edit:In your case, MyClass becomes the base object, and MyClassMock is a subclass.
I know this is an old question, but I liked it, and as such, I decided to give it an answer of my own. Perhaps my understanding of why this cannot be done will contribute to the discussion and to future readers of your interesting question.
\n\n
Let me start with an example of failing object construction.
\n\n
Let's define a class A, such that:
\n\n
class A {\n private String a = \"A\";\n\n public A() throws Exception {\n throw new Exception();\n }\n}\n
\n\n
Now, let's assume we would like to create an object of type A in a try...catch block.
\n\n
A a = null;\ntry{\n a = new A();\n}catch(Exception e) {\n //...\n}\nSystem.out.println(a);\n
\n\n
Evidently, the output of this code will be: null.
\n\n
Why Java does not return a partially constructed version of A? After all, by the point the constructor fails, the object's name field has already been initialized, right?
\n\n
Well, Java can't return a partially constructed version of A because the object was not successfully built. The object is in a inconsistent state, and it is therefore discarded by Java. Your variable A is not even initialized, it is kept as null.
\n\n
Now, as you know, to fully build a new object, all its super classes must be initialized first. If one of the super classes failed to execute, what would be the final state of the object? It is impossible to determine that.
\n\n
Look at this more elaborate example
\n\n
class A {\n private final int a;\n public A() throws Exception { \n a = 10;\n }\n}\n\nclass B extends A {\n private final int b;\n public B() throws Exception {\n methodThatThrowsException(); \n b = 20;\n }\n}\n\nclass C extends B {\n public C() throws Exception { super(); }\n}\n
\n\n
When the constructor of C is invoked, if an exception occurs while initializing B, what would be the value of the final int variable b?
\n\n
As such, the object C cannot be created, it is bogus, it is trash, it is not fully initialized.
One way to get around it is by calling a private static function. The try-catch can then be placed in the function body.
\n\n
public class Test {\n public Test() {\n this(Test.getObjectThatMightThrowException());\n }\n public Test(Object o) {\n //...\n }\n private static final Object getObjectThatMightThrowException() {\n try {\n return new ObjectThatMightThrowAnException();\n } catch(RuntimeException rtx) {\n throw new RuntimeException(\"It threw an exception!!!\", rtx);\n }\n }\n}\n
I know this question has numerous answers, but I'd like to give my little tidbit on why this wouldn't be allowed, specifically to answer why Java does not allow you to do this. So here you go...
\n\n
Now, keep in mind that super() has to be called before anything else in a subclass's constructor, so, if you did use try and catch blocks around your super() call, the blocks would have to look like this:
\n\n
try {\n super();\n ...\n} catch (Exception e) {\n super(); //This line will throw the same error...\n ...\n}\n
\n\n
If super() fails in the try block, it HAS to be executed first in the catch block, so that super runs before anything in your subclass`s constructor. This leaves you with the same problem you had at the beginning: if an exception is thrown, it isn't caught. (In this case it just gets thrown again in the catch block.)
\n\n
Now, the above code is in no way allowed by Java either. This code may execute half of the first super call, and then call it again, which could cause some problems with some super classes.
\n\n
Now, the reason that Java doesn't let you throw an exception instead of calling super() is because the exception could be caught somewhere else, and the program would continue without calling super() on your subclass object, and possibly because the exception could take your object as a parameter and try to change the value of inherited instance variables, which would not yet have been initialized.
Unfortunately, compilers can't work on theoretical principles, and even though you may know that it is safe in your case, if they allowed it, it would have to be safe for all cases.
\\n\\n
In other words, the compiler isn't stopping just you, it's stopping everyone, including all those that don't know that it is unsafe and needs special handling. There are probably other reasons for this as well, as all languages usually have ways to do unsafe things if one knows how to deal with them.
\\n\\n
In C# .NET there are similar provisions, and the only way to declare a constructor that calls a base constructor is this:
\\n\\n
public ClassName(...) : base(...)\\n
\\n\\n
in doing so, the base constructor will be called before the body of the constructor, and you cannot change this order.
I don't know how Java is implemented internally, but if the constructor of the superclass throws an exception, then there isn't a instance of the class you extend. It would be impossible to call the toString() or equals() methods, for example, since they are inherited in most cases.
\\n\\n
Java may allow a try/catch around the super() call in the constructor if 1. you override ALL methods from the superclasses, and 2. you don't use the super.XXX() clause, but that all sounds too complicated to me.
I can't presume to have a deep understanding of Java internals, but it is my understanding that, when a compiler needs to instantiate a derived class, it has to first create the base (and its base before that(...)) and then slap on the extensions made in the subclass.
\\n\\n
So it is not even the danger of uninited variables or anything like that at all. When you try to do something in the subclass' constructor before the base class' constructor, you are basically asking the compiler to extend a base object instance that doesn't exist yet.
\\n\\n
Edit:In your case, MyClass becomes the base object, and MyClassMock is a subclass.
I know this is an old question, but I liked it, and as such, I decided to give it an answer of my own. Perhaps my understanding of why this cannot be done will contribute to the discussion and to future readers of your interesting question.
\\n\\n
Let me start with an example of failing object construction.
\\n\\n
Let's define a class A, such that:
\\n\\n
class A {\\n private String a = \\\"A\\\";\\n\\n public A() throws Exception {\\n throw new Exception();\\n }\\n}\\n
\\n\\n
Now, let's assume we would like to create an object of type A in a try...catch block.
\\n\\n
A a = null;\\ntry{\\n a = new A();\\n}catch(Exception e) {\\n //...\\n}\\nSystem.out.println(a);\\n
\\n\\n
Evidently, the output of this code will be: null.
\\n\\n
Why Java does not return a partially constructed version of A? After all, by the point the constructor fails, the object's name field has already been initialized, right?
\\n\\n
Well, Java can't return a partially constructed version of A because the object was not successfully built. The object is in a inconsistent state, and it is therefore discarded by Java. Your variable A is not even initialized, it is kept as null.
\\n\\n
Now, as you know, to fully build a new object, all its super classes must be initialized first. If one of the super classes failed to execute, what would be the final state of the object? It is impossible to determine that.
\\n\\n
Look at this more elaborate example
\\n\\n
class A {\\n private final int a;\\n public A() throws Exception { \\n a = 10;\\n }\\n}\\n\\nclass B extends A {\\n private final int b;\\n public B() throws Exception {\\n methodThatThrowsException(); \\n b = 20;\\n }\\n}\\n\\nclass C extends B {\\n public C() throws Exception { super(); }\\n}\\n
\\n\\n
When the constructor of C is invoked, if an exception occurs while initializing B, what would be the value of the final int variable b?
\\n\\n
As such, the object C cannot be created, it is bogus, it is trash, it is not fully initialized.
One way to get around it is by calling a private static function. The try-catch can then be placed in the function body.
\\n\\n
public class Test {\\n public Test() {\\n this(Test.getObjectThatMightThrowException());\\n }\\n public Test(Object o) {\\n //...\\n }\\n private static final Object getObjectThatMightThrowException() {\\n try {\\n return new ObjectThatMightThrowAnException();\\n } catch(RuntimeException rtx) {\\n throw new RuntimeException(\\\"It threw an exception!!!\\\", rtx);\\n }\\n }\\n}\\n
I know this question has numerous answers, but I'd like to give my little tidbit on why this wouldn't be allowed, specifically to answer why Java does not allow you to do this. So here you go...
\\n\\n
Now, keep in mind that super() has to be called before anything else in a subclass's constructor, so, if you did use try and catch blocks around your super() call, the blocks would have to look like this:
\\n\\n
try {\\n super();\\n ...\\n} catch (Exception e) {\\n super(); //This line will throw the same error...\\n ...\\n}\\n
\\n\\n
If super() fails in the try block, it HAS to be executed first in the catch block, so that super runs before anything in your subclass`s constructor. This leaves you with the same problem you had at the beginning: if an exception is thrown, it isn't caught. (In this case it just gets thrown again in the catch block.)
\\n\\n
Now, the above code is in no way allowed by Java either. This code may execute half of the first super call, and then call it again, which could cause some problems with some super classes.
\\n\\n
Now, the reason that Java doesn't let you throw an exception instead of calling super() is because the exception could be caught somewhere else, and the program would continue without calling super() on your subclass object, and possibly because the exception could take your object as a parameter and try to change the value of inherited instance variables, which would not yet have been initialized.
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/07"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5328","https://Stackoverflow.com","https://Stackoverflow.com/users/122/"],"string":"[\n \"https://Stackoverflow.com/questions/5328\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/122/\"\n]"},"input":{"kind":"string","value":"So, in Java, the first line of your constructor HAS to be a call to super... be it implicitly calling super(), or explicitly calling another constructor. What I want to know is, why can't I put a try block around that?\n\nMy specific case is that I have a mock class for a test. There is no default constructor, but I want one to make the tests simpler to read. I also want to wrap the exceptions thrown from the constructor into a RuntimeException.\n\nSo, what I want to do is effectively this:\n\n```\npublic class MyClassMock extends MyClass {\n public MyClassMock() {\n try {\n super(0);\n } catch (Exception e) {\n throw new RuntimeException(e);\n }\n }\n\n // Mocked methods\n}\n\n```\n\nBut Java complains that super isn't the first statement.\n\nMy workaround:\n\n```\npublic class MyClassMock extends MyClass {\n public static MyClassMock construct() {\n try {\n return new MyClassMock();\n } catch (Exception e) {\n throw new RuntimeException(e);\n }\n }\n\n public MyClassMock() throws Exception {\n super(0);\n }\n\n // Mocked methods\n}\n\n```\n\nIs this the best workaround? Why doesn't Java let me do the former?\n\n---\n\nMy best guess as to the \"why\" is that Java doesn't want to let me have a constructed object in a potentially inconsistent state... however, in doing a mock, I don't care about that. It seems I should be able to do the above... or at least I know that the above is safe for my case... 
or seems as though it should be anyways.\n\nI am overriding any methods I use from the tested class, so there is no risk that I am using uninitialized variables."},"output":{"kind":"string","value":"Unfortunately, compilers can't work on theoretical principles, and even though you may know that it is safe in your case, if they allowed it, it would have to be safe for all cases.\n\nIn other words, the compiler isn't stopping just you, it's stopping everyone, including all those that don't know that it is unsafe and needs special handling. There are probably other reasons for this as well, as all languages usually have ways to do *unsafe* things if one knows how to deal with them.\n\nIn C# .NET there are similar provisions, and the only way to declare a constructor that calls a base constructor is this:\n\n```\npublic ClassName(...) : base(...)\n\n```\n\nin doing so, the base constructor will be called before the body of the constructor, and you cannot change this order."}}},{"rowIdx":462,"cells":{"qid":{"kind":"number","value":5374,"string":"5,374"},"question":{"kind":"string","value":"
I would have thought this would be an easy one to Google, but I've been unsucessful.
\n\n
I want to assign a variable the value out of an attribute (easy so far) then use that variable to select another node based on the value of that attribute.
That does not work. If I replace the {@myId} with the value that is in the variable then it does find the right node, but doign it this way produces nothing. I'm sure I'm missing something, or perhaps there is a different way to do it.
\n\n
The context is that there is related data under different top-level nodes that share the same id value so I need to get the related nodes in my template.
Ok, I finally figured it out. Silly problem really, I simply needed to leave out the quotes and the braces. One of those times when I thought that I'd already tried that. :D Oh, and I mistyped @myId in the first example, the code was actually $myId.
You seem to have got confused with use of a variable (which is just $variable) and Attribute Value Templates, which allow you to put any XPath expression in some attributes, e.g.
\n\n
<newElement Id=\"{@Id}\"/>\n
\n\n
They can obviously be combined, so you can include a variable in an Attribute Value Template, such as:
Ok, I finally figured it out. Silly problem really, I simply needed to leave out the quotes and the braces. One of those times when I thought that I'd already tried that. :D Oh, and I mistyped @myId in the first example, the code was actually $myId.
You seem to have got confused with use of a variable (which is just $variable) and Attribute Value Templates, which allow you to put any XPath expression in some attributes, e.g.
\\n\\n
<newElement Id=\\\"{@Id}\\\"/>\\n
\\n\\n
They can obviously be combined, so you can include a variable in an Attribute Value Template, such as:
\\n\\n
<newElement Id=\\\"{$myId}\\\"/>\\n
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/07"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5374","https://Stackoverflow.com","https://Stackoverflow.com/users/312/"],"string":"[\n \"https://Stackoverflow.com/questions/5374\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/312/\"\n]"},"input":{"kind":"string","value":"I would have thought this would be an easy one to Google, but I've been unsucessful.\n\nI want to assign a variable the value out of an attribute (easy so far) then use that variable to select another node based on the value of that attribute.\n\nExample:\n\n```\n\n\n\n```\n\nThat does not work. If I replace the {@myId} with the value that is in the variable then it does find the right node, but doign it this way produces nothing. I'm sure I'm missing something, or perhaps there is a different way to do it.\n\nThe context is that there is related data under different top-level nodes that share the same id value so I need to get the related nodes in my template."},"output":{"kind":"string","value":"Ok, I finally figured it out. Silly problem really, I simply needed to leave out the quotes and the braces. One of those times when I *thought* that I'd already tried that. :D Oh, and I mistyped @myId in the first example, the code was actually $myId.\n\n```\n\n\n\n```"}}},{"rowIdx":463,"cells":{"qid":{"kind":"number","value":5415,"string":"5,415"},"question":{"kind":"string","value":"
I have a binary file that I have to parse and I'm using Python. Is there a way to take 4 bytes and convert it to a single precision floating point number?
I would add a comment but I don't have enough reputation.
\\n
Just to add some info. If you have a byte buffer containing X amount of floats, the syntax for unpacking would be:
\\n
struct.unpack('Xf', ...)\\n
\\n
If the values are doubles the unpacking would be:
\\n
struct.unpack('Xd', ...)\\n
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/07"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5415","https://Stackoverflow.com","https://Stackoverflow.com/users/680/"],"string":"[\n \"https://Stackoverflow.com/questions/5415\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/680/\"\n]"},"input":{"kind":"string","value":"I have a binary file that I have to parse and I'm using Python. Is there a way to take 4 bytes and convert it to a single precision floating point number?"},"output":{"kind":"string","value":"```\n>>> import struct\n>>> struct.pack('f', 3.141592654)\nb'\\xdb\\x0fI@'\n>>> struct.unpack('f', b'\\xdb\\x0fI@')\n(3.1415927410125732,)\n>>> struct.pack('4f', 1.0, 2.0, 3.0, 4.0)\n'\\x00\\x00\\x80?\\x00\\x00\\x00@\\x00\\x00@@\\x00\\x00\\x80@'\n\n```"}}},{"rowIdx":464,"cells":{"qid":{"kind":"number","value":5419,"string":"5,419"},"question":{"kind":"string","value":"
When I try to print a Unicode string in a Windows console, I get an error:
\n
UnicodeEncodeError: 'charmap' codec can't encode character ....
\n
I assume this is because the Windows console does not accept Unicode-only characters. What's the best way around this?\nIs there any way I can make Python automatically print a ? instead of failing in this situation?
\n
Edit: I'm using Python 2.5.
\n\n
Note: @LasseV.Karlsen's answer with the checkmark is sort of outdated (from 2008). Please use the solutions/answers/suggestions below with care!!
\n"},"answers":{"kind":"list like","value":[{"answer_id":5430,"author":"Lasse V. Karlsen","author_id":267,"author_profile":"https://Stackoverflow.com/users/267","pm_score":6,"selected":true,"text":"
Note: This answer is sort of outdated (from 2008). Please use the solution below with care!!
\n\n\n\n
Here is a page that details the problem and a solution (search the page for the text Wrapping sys.stdout into an instance):
The cause of your problem is NOT the Windows console being unwilling to accept Unicode (it has accepted it by default since, I guess, Win2k). It is the default system encoding. Try this code and see what it gives you:
\n
import sys\nsys.getdefaultencoding()\n
\n
if it says ascii, there's your cause ;-) You have to create a file called sitecustomize.py and put it on the Python path (I put it under /usr/lib/python2.5/site-packages, but that is different on Windows - it is c:\python\lib\site-packages or something), with the following contents:
\n
import sys\nsys.setdefaultencoding('utf-8')\n
\n
and perhaps you might want to specify the encoding in your files as well:
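The per-file declaration referred to here is standardized by PEP 263; a minimal sketch (the string literal below is just an illustration):

```python
# -*- coding: utf-8 -*-
# PEP 263 declaration: tells the Python 2 parser how to decode
# non-ASCII bytes in the string literals of this source file.
greeting = u"héllo wörld"
print(greeting)
```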
The code below will make Python output UTF-8 to the console even on Windows.
\n\n
The console will display the characters correctly on Windows 7; on Windows XP it will not display them well, but at least it will work and, most importantly, you will have consistent output from your script on all platforms. You'll also be able to redirect the output to a file.
\n\n
The code below was tested with Python 2.6 on Windows.
\n\n
\n#!/usr/bin/python\n# -*- coding: UTF-8 -*-\n\nimport codecs, sys\n\nreload(sys)\nsys.setdefaultencoding('utf-8')\n\nprint sys.getdefaultencoding()\n\nif sys.platform == 'win32':\n try:\n import win32console \n except:\n print \"Python Win32 Extensions module is required.\\n You can download it from https://sourceforge.net/projects/pywin32/ (x86 and x64 builds are available)\\n\"\n exit(-1)\n # win32console implementation of SetConsoleCP does not return a value\n # CP_UTF8 = 65001\n win32console.SetConsoleCP(65001)\n if (win32console.GetConsoleCP() != 65001):\n raise Exception (\"Cannot set console codepage to 65001 (UTF-8)\")\n win32console.SetConsoleOutputCP(65001)\n if (win32console.GetConsoleOutputCP() != 65001):\n raise Exception (\"Cannot set console output codepage to 65001 (UTF-8)\")\n\n#import sys, codecs\nsys.stdout = codecs.getwriter('utf8')(sys.stdout)\nsys.stderr = codecs.getwriter('utf8')(sys.stderr)\n\nprint \"This is an Е乂αmp١ȅ testing Unicode support using Arabic, Latin, Cyrillic, Greek, Hebrew and CJK code points.\\n\"\n
Update: On Python 3.6 or later, printing Unicode strings to the console on Windows just works.
\n
So, upgrade to recent Python and you're done. At this point I recommend using 2to3 to update your code to Python 3.x if needed, and just dropping support for Python 2.x. Note that there has been no security support for any version of Python before 3.7 (including Python 2.7) since December 2021.
\n
If you really still need to support earlier versions of Python (including Python 2.7), you can use https://github.com/Drekin/win-unicode-console , which is based on, and uses the same APIs as the code in the answer that was previously linked here. (That link does include some information on Windows font configuration but I doubt it still applies to Windows 8 or later.)
\n
Note: despite other plausible-sounding answers that suggest changing the code page to 65001, that did not work prior to Python 3.8. (It does kind-of work since then, but as pointed out above, you don't need to do so for Python 3.6+ anyway.) Also, changing the default encoding using sys.setdefaultencoding is (still) not a good idea.
If you're not interested in getting a reliable representation of the bad character(s), you might use something like this (working with Python >= 2.6, including 3.x):
\n\n
from __future__ import print_function\nimport sys\n\ndef safeprint(s):\n try:\n print(s)\n except UnicodeEncodeError:\n if sys.version_info >= (3,):\n print(s.encode('utf8').decode(sys.stdout.encoding))\n else:\n print(s.encode('utf8'))\n\nsafeprint(u\"\\N{EM DASH}\")\n
\n\n
The bad character(s) in the string will be converted into a representation which is printable by the Windows console.
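A variant of the same idea, if you'd rather see escape sequences than question marks, is the 'backslashreplace' error handler; this sketch is an addition, not part of the answer above:

```python
import sys

def safeprint_br(s):
    # Re-encode for the current stdout encoding, turning unencodable
    # characters into \xNN / \uNNNN escapes instead of raising.
    enc = sys.stdout.encoding or "ascii"
    print(s.encode(enc, "backslashreplace").decode(enc))

safeprint_br(u"\N{EM DASH}")
```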
I get a UnicodeEncodeError: 'charmap' codec can't encode character... error.
\n
\n
The error means that Unicode characters that you are trying to print can't be represented using the current (chcp) console character encoding. The codepage is often an 8-bit encoding such as cp437 that can represent only ~0x100 characters out of ~1M Unicode characters:
\n
>>> u\"\\N{EURO SIGN}\".encode('cp437')\nTraceback (most recent call last):\n...\nUnicodeEncodeError: 'charmap' codec can't encode character '\\u20ac' in position 0:\ncharacter maps to
\n
\n
I assume this is because the Windows console does not accept Unicode-only characters. What's the best way around this?
\n
\n
The Windows console does accept Unicode characters and it can even display them (BMP only) if the corresponding font is configured. The WriteConsoleW() API should be used, as suggested in @Daira Hopwood's answer. It can be called transparently, i.e., you don't need to (and should not) modify your scripts if you use the win-unicode-console package:
\n
T:\\> py -m pip install win-unicode-console\nT:\\> py -m run your_script.py\n
Is there any way I can make Python\nautomatically print a ? instead of failing in this situation?
\n
\n
If it is enough to replace all unencodable characters with ? in your case then you could set PYTHONIOENCODING envvar:
\n
T:\\> set PYTHONIOENCODING=:replace\nT:\\> python3 -c "print(u'[\\N{EURO SIGN}]')"\n[?]\n
\n
In Python 3.6+, the encoding specified by PYTHONIOENCODING envvar is ignored for interactive console buffers unless PYTHONLEGACYWINDOWSIOENCODING envvar is set to a non-empty string.
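On Python 3.7+ the same replace behaviour is also available in-process, without the environment variable; a sketch (my addition, not from the answer above) using TextIOWrapper.reconfigure():

```python
import sys

# Switch the already-open stdout stream to replace unencodable
# characters with '?' instead of raising UnicodeEncodeError.
# (Guarded, since only io.TextIOWrapper streams have reconfigure().)
if hasattr(sys.stdout, "reconfigure"):
    sys.stdout.reconfigure(errors="replace")

print(u"[\N{EURO SIGN}]")  # shows [?] only if the console cannot encode the euro sign
```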
Like Giampaolo Rodolà's answer, but even more dirty: I really, really intend to spend a long time (soon) understanding the whole subject of encodings and how they apply to Windoze consoles.
\n\n
For the moment I just wanted something which would mean my program would NOT CRASH, and which I understood ... and also which didn't involve importing too many exotic modules (in particular I'm using Jython, so half the time a Python module turns out not in fact to be available).
\n\n
def pr(s):
    # Print s if possible; otherwise fall back to printing character by
    # character, substituting '?' for anything the console cannot encode.
    try:
        print(s)
    except UnicodeEncodeError:
        for c in s:
            try:
                print(c, end='')
            except UnicodeEncodeError:
                print('?', end='')
        print()  # finish the line, as print(s) would have done
\n\n
NB \"pr\" is shorter to type than \"print\" (and quite a bit shorter to type than \"safeprint\")...!
Is there any way I can make Python automatically print a ? instead of failing in this situation?
\n
\n\n
Other solutions recommend we attempt to modify the Windows environment or replace Python's print() function. The answer below comes closer to fulfilling Sulak's request.
\n\n
Under Windows 7, Python 3.5 can be made to print Unicode without throwing a UnicodeEncodeError as follows:
\n\n
In place of:\n print(text) \n substitute:\n print(str(text).encode('utf-8'))
\n\n
Instead of throwing an exception, Python now displays unprintable Unicode characters as \\xNN hex codes, e.g.:
\n\n
Halmalo n\\xe2\\x80\\x99\\xc3\\xa9tait plus qu\\xe2\\x80\\x99un point noir
\n\n
Instead of
\n\n
Halmalo n’était plus qu’un point noir
\n\n
Granted, the latter is preferable ceteris paribus, but otherwise the former is completely accurate for diagnostic messages. Because it displays Unicode as literal byte values the former may also assist in diagnosing encode/decode problems.
\n\n
Note: The str() call above is needed because otherwise encode() causes Python to reject a Unicode character as a tuple of numbers.
Python 3.6 windows7: There is several way to launch a python you could use the python console (which has a python logo on it) or the windows console (it's written cmd.exe on it).
\n\n
I could not print utf8 characters in the windows console. Printing utf-8 characters throw me this error:
\n\n
OSError: [winError 87] The paraneter is incorrect \nException ignored in: (_io-TextIOwrapper name='(stdout)' mode='w' ' encoding='utf8') \nOSError: [WinError 87] The parameter is incorrect \n
\n\n
After trying and failing to understand the answer above I discovered it was only a setting problem. Right click on the top of the cmd console windows, on the tab font chose lucida console.
Nowadays, the Windows console does not encounter this error, unless you redirect the output.
\n
Here is an example Python script scratch_1.py:
\n
s = "∞"\n\nprint(s)\n
\n
If you run the script as follows, everything works as intended:
\n
python scratch_1.py\n
\n
∞\n
\n
However, if you run the following, then you get the same error as in the question:
\n
python scratch_1.py > temp.txt\n
\n
Traceback (most recent call last):\n File "C:\\Users\\Wok\\AppData\\Roaming\\JetBrains\\PyCharmCE2022.2\\scratches\\scratch_1.py", line 3, in <module>\n print(s)\n File "C:\\Users\\Wok\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\encodings\\cp1252.py", line 19, in encode\n return codecs.charmap_encode(input,self.errors,encoding_table)[0]\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nUnicodeEncodeError: 'charmap' codec can't encode character '\\u221e' in position 0: character maps to <undefined>\n
\n\n
To solve this issue with the suggestion present in the original question, i.e. by replacing the erroneous characters with question marks ?, one can proceed as follows:
\n
s = "∞"\n\ntry:\n print(s)\nexcept UnicodeEncodeError:\n output_str = s.encode("ascii", errors="replace").decode("ascii")\n\n print(output_str)\n
\n
It is important:
\n
\n
to call decode(), so that the type of the output is str instead of bytes,
\n
with the same encoding, here "ascii", to avoid the creation of mojibake.
The cause of your problem is NOT the Win console not willing to accept Unicode (as it does this since I guess Win2k by default). It is the default system encoding. Try this code and see what it gives you:
\\n
import sys\\nsys.getdefaultencoding()\\n
\\n
if it says ascii, there's your cause ;-)\\nYou have to create a file called sitecustomize.py and put it under python path (I put it under /usr/lib/python2.5/site-packages, but that is differen on Win - it is c:\\\\python\\\\lib\\\\site-packages or something), with the following contents:
\\n
import sys\\nsys.setdefaultencoding('utf-8')\\n
\\n
and perhaps you might want to specify the encoding in your files as well:
The below code will make Python output to console as UTF-8 even on Windows.
\\n\\n
The console will display the characters well on Windows 7 but on Windows XP it will not display them well, but at least it will work and most important you will have a consistent output from your script on all platforms. You'll be able to redirect the output to a file.
\\n\\n
Below code was tested with Python 2.6 on Windows.
\\n\\n
\\n#!/usr/bin/python\\n# -*- coding: UTF-8 -*-\\n\\nimport codecs, sys\\n\\nreload(sys)\\nsys.setdefaultencoding('utf-8')\\n\\nprint sys.getdefaultencoding()\\n\\nif sys.platform == 'win32':\\n try:\\n import win32console \\n except:\\n print \\\"Python Win32 Extensions module is required.\\\\n You can download it from https://sourceforge.net/projects/pywin32/ (x86 and x64 builds are available)\\\\n\\\"\\n exit(-1)\\n # win32console implementation of SetConsoleCP does not return a value\\n # CP_UTF8 = 65001\\n win32console.SetConsoleCP(65001)\\n if (win32console.GetConsoleCP() != 65001):\\n raise Exception (\\\"Cannot set console codepage to 65001 (UTF-8)\\\")\\n win32console.SetConsoleOutputCP(65001)\\n if (win32console.GetConsoleOutputCP() != 65001):\\n raise Exception (\\\"Cannot set console output codepage to 65001 (UTF-8)\\\")\\n\\n#import sys, codecs\\nsys.stdout = codecs.getwriter('utf8')(sys.stdout)\\nsys.stderr = codecs.getwriter('utf8')(sys.stderr)\\n\\nprint \\\"This is an Е乂αmp١ȅ testing Unicode support using Arabic, Latin, Cyrillic, Greek, Hebrew and CJK code points.\\\\n\\\"\\n
Update: On Python 3.6 or later, printing Unicode strings to the console on Windows just works.
\\n
So, upgrade to recent Python and you're done. At this point I recommend using 2to3 to update your code to Python 3.x if needed, and just dropping support for Python 2.x. Note that there has been no security support for any version of Python before 3.7 (including Python 2.7) since December 2021.
\\n
If you really still need to support earlier versions of Python (including Python 2.7), you can use https://github.com/Drekin/win-unicode-console , which is based on, and uses the same APIs as the code in the answer that was previously linked here. (That link does include some information on Windows font configuration but I doubt it still applies to Windows 8 or later.)
\\n
Note: despite other plausible-sounding answers that suggest changing the code page to 65001, that did not work prior to Python 3.8. (It does kind-of work since then, but as pointed out above, you don't need to do so for Python 3.6+ anyway.) Also, changing the default encoding using sys.setdefaultencoding is (still) not a good idea.
If you're not interested in getting a reliable representation of the bad character(s) you might use something like this (working with python >= 2.6, including 3.x):
\\n\\n
from __future__ import print_function\\nimport sys\\n\\ndef safeprint(s):\\n try:\\n print(s)\\n except UnicodeEncodeError:\\n if sys.version_info >= (3,):\\n print(s.encode('utf8').decode(sys.stdout.encoding))\\n else:\\n print(s.encode('utf8'))\\n\\nsafeprint(u\\\"\\\\N{EM DASH}\\\")\\n
\\n\\n
The bad character(s) in the string will be converted in a representation which is printable by the Windows console.
I get a UnicodeEncodeError: 'charmap' codec can't encode character... error.
\\n
\\n
The error means that Unicode characters that you are trying to print can't be represented using the current (chcp) console character encoding. The codepage is often 8-bit encoding such as cp437 that can represent only ~0x100 characters from ~1M Unicode characters:
\\n
>>> u\\\"\\\\N{EURO SIGN}\\\".encode('cp437')\\nTraceback (most recent call last):\\n...\\nUnicodeEncodeError: 'charmap' codec can't encode character '\\\\u20ac' in position 0:\\ncharacter maps to
\\n
\\n
I assume this is because the Windows console does not accept Unicode-only characters. What's the best way around this?
\\n
\\n
Windows console does accept Unicode characters and it can even display them (BMP only) if the corresponding font is configured. WriteConsoleW() API should be used as suggested in @Daira Hopwood's answer. It can be called transparently i.e., you don't need to and should not modify your scripts if you use win-unicode-console package:
\\n
T:\\\\> py -m pip install win-unicode-console\\nT:\\\\> py -m run your_script.py\\n
Is there any way I can make Python\\nautomatically print a ? instead of failing in this situation?
\\n
\\n
If it is enough to replace all unencodable characters with ? in your case then you could set PYTHONIOENCODING envvar:
\\n
T:\\\\> set PYTHONIOENCODING=:replace\\nT:\\\\> python3 -c "print(u'[\\\\N{EURO SIGN}]')"\\n[?]\\n
\\n
In Python 3.6+, the encoding specified by PYTHONIOENCODING envvar is ignored for interactive console buffers unless PYTHONLEGACYWINDOWSIOENCODING envvar is set to a non-empty string.
Like Giampaolo Rodolà's answer, but even more dirty: I really, really intend to spend a long time (soon) understanding the whole subject of encodings and how they apply to Windoze consoles,
\\n\\n
For the moment I just wanted sthg which would mean my program would NOT CRASH, and which I understood ... and also which didn't involve importing too many exotic modules (in particular I'm using Jython, so half the time a Python module turns out not in fact to be available).
\\n\\n
def pr(s):\\n try:\\n print(s)\\n except UnicodeEncodeError:\\n for c in s:\\n try:\\n print( c, end='')\\n except UnicodeEncodeError:\\n print( '?', end='')\\n
\\n\\n
NB \\\"pr\\\" is shorter to type than \\\"print\\\" (and quite a bit shorter to type than \\\"safeprint\\\")...!
Is there any way I can make Python automatically print a ? instead of failing in this situation?
\\n
\\n\\n
Other solutions recommend we attempt to modify the Windows environment or replace Python's print() function. The answer below comes closer to fulfilling Sulak's request.
\\n\\n
Under Windows 7, Python 3.5 can be made to print Unicode without throwing a UnicodeEncodeError as follows:
\\n\\n
In place of:\\n print(text) \\n substitute:\\n print(str(text).encode('utf-8'))
\\n\\n
Instead of throwing an exception, Python now displays unprintable Unicode characters as \\\\xNN hex codes, e.g.:
\\n\\n
Halmalo n\\\\xe2\\\\x80\\\\x99\\\\xc3\\\\xa9tait plus qu\\\\xe2\\\\x80\\\\x99un point noir
\\n\\n
Instead of
\\n\\n
Halmalo n’était plus qu’un point noir
\\n\\n
Granted, the latter is preferable ceteris paribus, but otherwise the former is completely accurate for diagnostic messages. Because it displays Unicode as literal byte values the former may also assist in diagnosing encode/decode problems.
\\n\\n
Note: The str() call above is needed because otherwise encode() causes Python to reject a Unicode character as a tuple of numbers.
Python 3.6 windows7: There is several way to launch a python you could use the python console (which has a python logo on it) or the windows console (it's written cmd.exe on it).
\\n\\n
I could not print utf8 characters in the windows console. Printing utf-8 characters throw me this error:
\\n\\n
OSError: [winError 87] The paraneter is incorrect \\nException ignored in: (_io-TextIOwrapper name='(stdout)' mode='w' ' encoding='utf8') \\nOSError: [WinError 87] The parameter is incorrect \\n
\\n\\n
After trying and failing to understand the answer above I discovered it was only a setting problem. Right click on the top of the cmd console windows, on the tab font chose lucida console.
Nowadays, the Windows console does not encounter this error, unless you redirect the output.
\\n
Here is an example Python script scratch_1.py:
\\n
s = "∞"\\n\\nprint(s)\\n
\\n
If you run the script as follows, everything works as intended:
\\n
python scratch_1.py\\n
\\n
∞\\n
\\n
However, if you run the following, then you get the same error as in the question:
\\n
python scratch_1.py > temp.txt\\n
\\n
Traceback (most recent call last):\\n File "C:\\\\Users\\\\Wok\\\\AppData\\\\Roaming\\\\JetBrains\\\\PyCharmCE2022.2\\\\scratches\\\\scratch_1.py", line 3, in <module>\\n print(s)\\n File "C:\\\\Users\\\\Wok\\\\AppData\\\\Local\\\\Programs\\\\Python\\\\Python311\\\\Lib\\\\encodings\\\\cp1252.py", line 19, in encode\\n return codecs.charmap_encode(input,self.errors,encoding_table)[0]\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\nUnicodeEncodeError: 'charmap' codec can't encode character '\\\\u221e' in position 0: character maps to <undefined>\\n
\\n\\n
To solve this issue with the suggestion present in the original question, i.e. by replacing the erroneous characters with question marks ?, one can proceed as follows:
\\n
s = "∞"\\n\\ntry:\\n print(s)\\nexcept UnicodeEncodeError:\\n output_str = s.encode("ascii", errors="replace").decode("ascii")\\n\\n print(output_str)\\n
\\n
It is important:
\\n
\\n
to call decode(), so that the type of the output is str instead of bytes,
\\n
with the same encoding, here "ascii", to avoid the creation of mojibake.
\\n
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/07"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5419","https://Stackoverflow.com","https://Stackoverflow.com/users/207/"],"string":"[\n \"https://Stackoverflow.com/questions/5419\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/207/\"\n]"},"input":{"kind":"string","value":"When I try to print a Unicode string in a Windows console, I get an error .\n\n`UnicodeEncodeError: 'charmap' codec can't encode character ....`\n\nI assume this is because the Windows console does not accept Unicode-only characters. What's the best way around this?\nIs there any way I can make Python automatically print a `?` instead of failing in this situation?\n\n**Edit:** I'm using Python 2.5.\n\n---\n\n**Note:** @LasseV.Karlsen answer with the checkmark is sort of outdated (from 2008). Please use the solutions/answers/suggestions below with care!!\n\n[@JFSebastian answer](https://stackoverflow.com/a/32176732/610569) is more relevant as of today (6 Jan 2016)."},"output":{"kind":"string","value":"**Note:** This answer is sort of outdated (from 2008). 
Please use the solution below with care!!\n\n---\n\nHere is a page that details the problem and a solution (search the page for the text *Wrapping sys.stdout into an instance*):\n\n[PrintFails - Python Wiki](http://wiki.python.org/moin/PrintFails)\n\nHere's a code excerpt from that page:\n\n```\n$ python -c 'import sys, codecs, locale; print sys.stdout.encoding; \\\n sys.stdout = codecs.getwriter(locale.getpreferredencoding())(sys.stdout); \\\n line = u\"\\u0411\\n\"; print type(line), len(line); \\\n sys.stdout.write(line); print line'\n UTF-8\n 2\n Б\n Б\n\n $ python -c 'import sys, codecs, locale; print sys.stdout.encoding; \\\n sys.stdout = codecs.getwriter(locale.getpreferredencoding())(sys.stdout); \\\n line = u\"\\u0411\\n\"; print type(line), len(line); \\\n sys.stdout.write(line); print line' | cat\n None\n 2\n Б\n Б\n\n```\n\nThere's some more information on that page, well worth a read."}}},{"rowIdx":465,"cells":{"qid":{"kind":"number","value":5425,"string":"5,425"},"question":{"kind":"string","value":"
I have a page that is generated which inserts an HTML comment near the top of the page. Inside the comment is a *nix-style command.
\n\n
<!-- command --option value --option2 value2 --option3 -->\n
\n\n
This comment breaks the page completely. What is wrong with the comment to cause this to happen, and why is this the case?
Comments at the top of the page before <html> will throw IE into quirks mode, which could explain why the page breaks, if that's where your comment appears.
\n\n
For more information, check out the \"Triggering different rendering modes\" on this wikipedia page
Comments at the top of the page before <html> will throw IE into quirks mode, which could explain why the page breaks, if that's where your comment appears.
\\n\\n
For more information, check out the \\\"Triggering different rendering modes\\\" on this wikipedia page
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/07"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5425","https://Stackoverflow.com","https://Stackoverflow.com/users/219/"],"string":"[\n \"https://Stackoverflow.com/questions/5425\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/219/\"\n]"},"input":{"kind":"string","value":"I have a page that is generated which inserts an HTML comment near the top of the page. Inside the comment is a \\*nix-style command.\n\n```\n\n\n```\n\nThis comment breaks the page completely. What is wrong with the comment to cause this to happen, and why is this the case?"},"output":{"kind":"string","value":"Comments in the [XML Spec from the w3.org](http://www.w3.org/TR/REC-xml/#sec-comments) :\n\n> \n> For compatibility, the string \"--\"\n> (double-hyphen) MUST NOT occur within\n> comments.\n> \n> \n>"}}},{"rowIdx":466,"cells":{"qid":{"kind":"number","value":5428,"string":"5,428"},"question":{"kind":"string","value":"
Is it worth learning the convention or is it a bane to readability and maintainability?
\n"},"answers":{"kind":"list like","value":[{"answer_id":5439,"author":"Lasse V. Karlsen","author_id":267,"author_profile":"https://Stackoverflow.com/users/267","pm_score":7,"selected":true,"text":"
Considering that most people who use Hungarian Notation are following the misunderstood version of it, I'd say it's pretty pointless.
\n\n
If you want to use the original definition of it, it might make more sense, but other than that it is mostly syntactic sugar.
\n\n
If you read the Wikipedia article on the subject, you'll find two conflicting notations, Systems Hungarian Notation and Apps Hungarian Notation.
\n\n
The original, good, definition is the Apps Hungarian Notation, but most people use the Systems Hungarian Notation.
\n\n
As an example of the two, consider prefixing variables with l for length, a for area and v for volume.
\n\n
With such notation, the following expression makes sense:
\n\n
int vBox = aBottom * lVerticalSide;\n
\n\n
but this doesn't:
\n\n
int aBottom = lSide1;\n
\n\n
If you're mixing the prefixes, they're to be considered part of the equation, and volume = area * length is fine for a box, but copying a length value into an area variable should raise some red flags.
\n\n
Unfortunately, the other notation is less useful, where people prefix the variable names with the type of the value, like this:
\n\n
int iLength;\nint iVolume;\nint iArea;\n
\n\n
some people use n for number, or i for integer, f for float, s for string etc.
\n\n
The original prefix was meant to be used to spot problems in equations, but has somehow devolved into making the code slightly easier to read since you don't have to go look for the variable declaration. With today's smart editors, where you can simply hover over any variable to find the full type, and not just an abbreviation for it, this type of Hungarian notation has lost a lot of its meaning.
\n\n
But, you should make up your own mind. All I can say is that I don't use either.
\n\n\n\n
Edit Just to add a short notice, while I don't use Hungarian Notation, I do use a prefix, and it's the underscore. I prefix all private fields of classes with a _ and otherwise spell their names as I would a property, titlecase with the first letter uppercase.
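As a minimal sketch of that underscore convention, here in C++ (the class and field names are invented for illustration; note that in C++ an identifier starting with an underscore followed by a capital letter is reserved, so the field is lowercased here, whereas in C# the titlecase form works as described):

```cpp
#include <string>

// Sketch of the convention: private fields get an underscore prefix,
// the public accessor is spelled in title case.
class Person {
public:
    explicit Person(std::string firstName) : _firstName(std::move(firstName)) {}

    // Public accessor: title case, first letter uppercase.
    const std::string& FirstName() const { return _firstName; }

private:
    // Private field: spelled like the property, prefixed with an underscore.
    std::string _firstName;
};
```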
It depends on your language and environment. As a rule I wouldn't use it, unless the development environment you're in makes it hard to find the type of the variable.
\n\n
There are also two different types of Hungarian notation. See Joel's article. I can't find it (his names don't exactly make them easy to find), anyone have a link to the one I mean?
Essentially, type based Hungarian notation, where variables are prefixed with information about their type (e.g. whether an object is a string, a handle, an int, etc.) is mostly useless and generally just adds overhead with very little benefit. This, sadly, is the Hungarian notation most people are familiar with. However, the intent of Hungarian notation as envisioned is to add information on the \"kind\" of data the variable contains. This allows you to partition kinds of data from other kinds of data which shouldn't be allowed to be mixed together except, possibly, through some conversion process. For example, pixel based coordinates vs. coordinates in other units, or unsafe user input versus data from safe sources, etc.
\n\n
Look at it this way, if you find yourself spelunking through code to find out information on a variable then you probably need to adjust your naming scheme to contain that information, this is the essence of the Hungarian convention.
\n\n
Note that an alternative to Hungarian notation is to use more classes to show the intent of variable usage rather than relying on primitive types everywhere. For example, instead of having variable prefixes for unsafe user input, you can have simple string wrapper class for unsafe user input, and a separate wrapper class for safe data. This has the advantage, in strongly typed languages, of having partitioning enforced by the compiler (even in less strongly typed languages you can usually add your own tripwire code) but adds a not insignificant amount of overhead.
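To make the wrapper-class alternative concrete, here is a minimal C++ sketch; the types and the Escape function are invented for illustration, not taken from any real library:

```cpp
#include <string>

// Hypothetical wrapper types: the compiler now enforces the partition
// that an Apps Hungarian prefix such as "us" (unsafe string) could
// only hint at.
class UnsafeUserInput {
public:
    explicit UnsafeUserInput(std::string raw) : raw_(std::move(raw)) {}
    const std::string& raw() const { return raw_; }
private:
    std::string raw_;
};

class SafeHtml {
public:
    explicit SafeHtml(std::string value) : value_(std::move(value)) {}
    const std::string& str() const { return value_; }
private:
    std::string value_;
};

// The only route from unsafe input to safe output is an explicit
// conversion, so accidentally mixing the two kinds is a compile error.
SafeHtml Escape(const UnsafeUserInput& input) {
    std::string out;
    for (char c : input.raw()) {
        if (c == '&')      out += "&amp;";
        else if (c == '<') out += "&lt;";
        else if (c == '>') out += "&gt;";
        else               out += c;
    }
    return SafeHtml(out);
}
```

A function that renders HTML can then take a SafeHtml parameter, and passing it an UnsafeUserInput simply won't compile.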
I still use Hungarian Notation when it comes to UI elements, where several UI elements are related to a particular object/value, e.g.,
\n\n
lblFirstName for the label object, txtFirstName for the text box. I definitely can't name them both \"FirstName\" even if that is the concern/responsibility of both objects.
I use Hungarian Naming for UI elements like buttons, textboxes and labels. The main benefit is grouping in the Visual Studio Intellisense Popup. If I want to access my labels, I simply start typing lbl.... and Visual Studio will suggest all my labels, nicely grouped together.
\n\n
However, after doing more and more Silverlight and WPF stuff, leveraging data binding, I don't even name all my controls anymore, since I don't have to reference them from code-behind (there really isn't any code-behind anymore ;)
Sorry to follow up with a question, but does prefixing interfaces with \"I\" qualify as hungarian notation? If that is the case, then yes, a lot of people are using it in the real world. If not, ignore this.
The original prefix was meant to be\n used to spot problems in equations,\n but has somehow devolved into making\n the code slightly easier to read since\n you don't have to go look for the\n variable declaration. With todays\n smart editors where you can simply\n hover over any variable to find the\n full type, and not just an\n abbreviation for it, this type of\n hungarian notation has lost a lot of\n its meaning.
\n
\n\n
I'm breaking the habit a little bit, but prefixing with the type can be useful in JavaScript, which doesn't have strong variable typing.
The original form (the Right Hungarian Notation :) ), where the prefix means the kind of value stored by a variable (i.e. length, quantity), is OK, but not necessary in all types of applications.
\n\n
The popular form (the Wrong Hungarian Notation), where the prefix means the type (String, int), is useless in most modern programming languages.
\n\n
Especially with meaningless names like strA. I can't understand why people use meaningless names with long prefixes, which gain nothing.
When using a dynamically typed language, I occasionally use Apps Hungarian. For statically typed languages I don't. See my explanation in the other thread.
When I see Hungarian discussion, I'm glad to see people thinking hard about how to make their code clearer, and how to mistakes more visible. That's exactly what we should all be doing!
\n\n
But don't forget that you have some powerful tools at your disposal besides naming.
\n\n
Extract Method: If your methods are getting so long that your variable declarations have scrolled off the top of the screen, consider making your methods smaller. (If you have too many methods, consider a new class.)
\n\n
Strong typing: If you find that you are taking zip codes stored in an integer variable and assigning them to a shoe size integer variable, consider making a class for zip codes and a class for shoe size. Then your bug will be caught at compile time, instead of requiring careful inspection by a human. When I do this, I usually find a bunch of zip code- and shoe size-specific logic that I've peppered around my code, which I can then move in to my new classes. Suddenly all my code gets clearer, simpler, and protected from certain classes of bugs. Wow.
\n\n
To sum up: yes, think hard about how you use names in code to express your ideas clearly, but also look to the other powerful OO tools you can call on.
I use type based (Systems HN) for components (e.g. editFirstName, lblStatus, etc.) as it makes autocomplete work better.
\n\n
I sometimes use Apps HN for variables where the type information is insufficient. E.g. fpX indicates a fixed-point variable (int type, but can't be mixed and matched with a plain int), rawInput for user strings that haven't been validated, etc.
I see Hungarian Notation as a way to work around the limits of our short-term memories. According to psychologists, we can store approximately 7 plus-or-minus 2 chunks of information. The extra information added by including a prefix helps us by providing more details about the meaning of an identifier even with no other context. In other words, we can guess what a variable is for without seeing how it is used or declared. This can be avoided by applying OO techniques such as encapsulation and the single responsibility principle.
\n\n
I'm unaware of whether or not this has been studied empirically. I would hypothesize that the amount of effort increases dramatically when we try to understand classes with more than nine instance variables or methods with more than 9 local variables.
Isn't scope more important than type these days, e.g.
\n\n
\n
l for local
\n
a for argument
\n
m for member
\n
g for global
\n
etc
\n
\n\n
With modern techniques of refactoring old code, search and replace of a symbol because you changed its type is tedious, the compiler will catch type changes, but often will not catch incorrect use of scope, sensible naming conventions help here.
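A minimal C++ sketch of scope-based prefixes like these (the class and variable names are invented for illustration):

```cpp
// Scope prefixes: g = global, a = argument, l = local, m = member.
int gMaxRetries = 3;                     // g: global

class Connection {
public:
    void Retry(int aDelayMs) {           // a: argument
        int lAttempt = 0;                // l: local
        while (lAttempt < gMaxRetries) {
            mLastDelayMs = aDelayMs;     // m: member -- scope is obvious at a glance
            ++lAttempt;
        }
    }

    int lastDelay() const { return mLastDelayMs; }

private:
    int mLastDelayMs = 0;                // m: member
};
```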
I don't use a very strict sense of hungarian notation, but I do find myself using it sparingly for some common custom objects, and also I tend to prefix gui control objects with the type of control that they are. For example, labelFirstName, textFirstName, and buttonSubmit.
Hungarian notation is pointless in type-safe languages. For example, a common prefix you will see in old Microsoft code is "lpsz", which means "long pointer to a zero-terminated string". Since the early 1700's we haven't used segmented architectures where short and long pointers exist, the normal string representation in C++ is always zero-terminated, and the compiler is type-safe so won't let us apply non-string operations to the string. Therefore none of this information is of any real use to a programmer - it's just more typing.
\n\n
However, I use a similar idea: prefixes that clarify the usage of a variable.\nThe main ones are:
\n\n
\n
m = member
\n
c = const
\n
s = static
\n
v = volatile
\n
p = pointer (and pp=pointer to pointer, etc)
\n
i = index or iterator
\n
\n\n
These can be combined, so a static member variable which is a pointer would be \"mspName\".
\n\n
Where are these useful?
\n\n
\n
Where the usage is important, it is a good idea to constantly remind the programmer that a variable is (e.g.) a volatile or a pointer
\n
Pointer dereferencing used to do my head in until I used the p prefix. Now it's really easy to know when you have an object (Orange) a pointer to an object (pOrange) or a pointer to a pointer to an object (ppOrange). To dereference an object, just put an asterisk in front of it for each p in its name. Case solved, no more deref bugs!
\n
In constructors I usually find that a parameter name is identical to a member variable's name (e.g. size). I prefer to use \"mSize = size;\" than \"size = theSize\" or \"this.size = size\". It is also much safer: I don't accidentally use \"size = 1\" (setting the parameter) when I meant to say \"mSize = 1\" (setting the member)
\n
In loops, my iterator variables are all meaningful names. Most programmers use \"i\" or \"index\" and then have to make up new meaningless names (\"j\", \"index2\") when they want an inner loop. I use a meaningful name with an i prefix (iHospital, iWard, iPatient) so I always know what an iterator is iterating.
\n
In loops, you can mix several related variables by using the same base name with different prefixes: Orange orange = pOrange[iOrange]; This also means you don't make array indexing errors (pApple[i] looks ok, but write it as pApple[iOrange] and the error is immediately obvious).
\n
Many programmers will use my system without knowing it: by add a lengthy suffix like \"Index\" or \"Ptr\" - there isn't any good reason to use a longer form than a single character IMHO, so I use \"i\" and \"p\". Less typing, more consistent, easier to read.
\n
\n\n
This is a simple system which adds meaningful and useful information to code, and eliminates the possibility of many simple but common programming mistakes.
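As a rough C++ sketch of this prefix system (the class and variable names are invented for illustration):

```cpp
#include <cstddef>

// Usage prefixes from the scheme above: m = member, p = pointer, i = index.
class Ward {
public:
    // "mSize = size" reads naturally; no this->size or theSize workaround needed.
    explicit Ward(int size) : mSize(size) {}

    // pAges is a pointer to an array; the iAge index carries a matching
    // base name, so pAges[iAge] is visibly consistent (pAges[iOrange]
    // would stand out as an indexing error).
    static int TotalAge(const int* pAges, std::size_t count) {
        int total = 0;
        for (std::size_t iAge = 0; iAge < count; ++iAge) {
            total += pAges[iAge];
        }
        return total;
    }

    int size() const { return mSize; }

private:
    int mSize;                           // m: member
};
```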
Being a PHP programmer, where typing is very loose, I don't make a point of using it. However I will occasionally identify something as an array or as an object, depending on the size of the system and the scope of the variable.
\\n
\\n\\n
I'm breaking the habit a little bit but prefixing with the type can be useful in JavaScript that doesn't have strong variable typing.
Original form (The Right Hungarian Notation :) ) where prefix means type (i.e. length, quantity) of value stored by variable is OK, but not necessary in all type of applications.
\\n\\n
The popular form (The Wrong Hungarian Notation) where prefix means type (String, int) is useless in most of modern programming languages.
\\n\\n
Especially with meaningless names like strA. I can't understand we people use meaningless names with long prefixes which gives nothing.
When using a dynamically typed language, I occasionally use Apps Hungarian. For statically typed languages I don't. See my explanation in the other thread.
When I see Hungarian discussion, I'm glad to see people thinking hard about how to make their code clearer, and how to mistakes more visible. That's exactly what we should all be doing!
\\n\\n
But don't forget that you have some powerful tools at your disposal besides naming.
\\n\\n
Extract Method If your methods are getting so long that your variable declarations have scrolled off the top of the screen, consider making your methods smaller. (If you have too many methods, consider a new class.)
\\n\\n
Strong typing If you find that you are taking zip codes stored in an integer variable and assigning them to a shoe size integer variable, consider making a class for zip codes and a class for shoe size. Then your bug will be caught at compile time, instead of requiring careful inspection by a human. When I do this, I usually find a bunch of zip code- and shoe size-specific logic that I've peppered around my code, which I can then move in to my new classes. Suddenly all my code gets clearer, simpler, and protected from certain classes of bugs. Wow.
\\n\\n
To sum up: yes, think hard about how you use names in code to express your ideas clearly, but also look to the other powerful OO tools you can call on.
I use type based (Systems HN) for components (eg editFirstName, lblStatus etc) as it makes autocomplete work better.
\\n\\n
I sometimes use App HN for variables where the type infomation is isufficient. Ie fpX indicates a fixed pointed variable (int type, but can't be mixed and matched with an int), rawInput for user strings that haven't been validated etc
I see Hungarian Notation as a way to circumvent the capacity of our short term memories. According to psychologists, we can store approximately 7 plus-or-minus 2 chunks of information. The extra information added by including a prefix helps us by providing more details about the meaning of an identifier even with no other context. In other words, we can guess what a variable is for without seeing how it is used or declared. This can be avoided by applying oo techniques such as encapsulation and the single responsibility principle.
\\n\\n
I'm unaware of whether or not this has been studied empirically. I would hypothesize that the amount of effort increases dramatically when we try to understand classes with more than nine instance variables or methods with more than 9 local variables.
Isn't scope more important than type these days, e.g.
\\n\\n
\\n
l for local
\\n
a for argument
\\n
m for member
\\n
g for global
\\n
etc
\\n
\\n\\n
With modern techniques of refactoring old code, search and replace of a symbol because you changed its type is tedious, the compiler will catch type changes, but often will not catch incorrect use of scope, sensible naming conventions help here.
I don't use a very strict sense of hungarian notation, but I do find myself using it sparing for some common custom objects to help identify them, and also I tend to prefix gui control objects with the type of control that they are. For example, labelFirstName, textFirstName, and buttonSubmit.
Hungarian notation is pointless in type-safe languages. e.g. A common prefix you will see in old Microsoft code is \\\"lpsz\\\" which means \\\"long pointer to a zero-terminated string\\\". Since the early 1700's we haven't used segmented architectures where short and long pointers exist, the normal string representation in C++ is always zero-terminated, and the compiler is type-safe so won't let us apply non-string operations to the string. Therefore none of this information is of any real use to a programmer - it's just more typing.
\\n\\n
However, I use a similar idea: prefixes that clarify the usage of a variable.\\nThe main ones are:
\\n\\n
\\n
m = member
\\n
c = const
\\n
s = static
\\n
v = volatile
\\n
p = pointer (and pp=pointer to pointer, etc)
\\n
i = index or iterator
\\n
\\n\\n
These can be combined, so a static member variable which is a pointer would be \\\"mspName\\\".
\\n\\n
Where are these useful?
\\n\\n
\\n
Where the usage is important, it is a good idea to constantly remind the programmer that a variable is (e.g.) a volatile or a pointer
\\n
Pointer dereferencing used to do my head in until I used the p prefix. Now it's really easy to know when you have an object (Orange) a pointer to an object (pOrange) or a pointer to a pointer to an object (ppOrange). To dereference an object, just put an asterisk in front of it for each p in its name. Case solved, no more deref bugs!
\\n
In constructors I usually find that a parameter name is identical to a member variable's name (e.g. size). I prefer to use \\\"mSize = size;\\\" than \\\"size = theSize\\\" or \\\"this.size = size\\\". It is also much safer: I don't accidentally use \\\"size = 1\\\" (setting the parameter) when I meant to say \\\"mSize = 1\\\" (setting the member)
\\n
In loops, my iterator variables are all meaningful names. Most programmers use \\\"i\\\" or \\\"index\\\" and then have to make up new meaningless names (\\\"j\\\", \\\"index2\\\") when they want an inner loop. I use a meaningful name with an i prefix (iHospital, iWard, iPatient) so I always know what an iterator is iterating.
\\n
In loops, you can mix several related variables by using the same base name with different prefixes: Orange orange = pOrange[iOrange]; This also means you don't make array indexing errors (pApple[i] looks ok, but write it as pApple[iOrange] and the error is immediately obvious).
\\n
Many programmers will use my system without knowing it: by add a lengthy suffix like \\\"Index\\\" or \\\"Ptr\\\" - there isn't any good reason to use a longer form than a single character IMHO, so I use \\\"i\\\" and \\\"p\\\". Less typing, more consistent, easier to read.
\\n
\\n\\n
This is a simple system which adds meaningful and useful information to code, and eliminates the possibility of many simple but common programming mistakes.
Being a PHP programmer where it's very loosely typed, I don't make a point to use it. However I will occasionally identify something as an array or as an object depending on the size of the system and the scope of the variable.
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/07"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5428","https://Stackoverflow.com","https://Stackoverflow.com/users/659/"],"string":"[\n \"https://Stackoverflow.com/questions/5428\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/659/\"\n]"},"input":{"kind":"string","value":"Is it worth learning the convention or is it a bane to readability and maintainability?"},"output":{"kind":"string","value":"Considering that most people that use *Hungarian Notation* is following the misunderstood version of it, I'd say it's pretty pointless.\n\nIf you want to use the original definition of it, it might make more sense, but other than that it is mostly syntactic sugar.\n\nIf you read the [Wikipedia article](http://en.wikipedia.org/wiki/Hungarian_notation) on the subject, you'll find two conflicting notations, *Systems Hungarian Notation* and *Apps Hungarian Notation*.\n\nThe original, good, definition is the *Apps Hungarian Notation*, but most people use the *Systems Hungarian Notation*.\n\nAs an example of the two, consider prefixing variables with l for length, a for area and v for volume.\n\nWith such notation, the following expression makes sense:\n\n```\nint vBox = aBottom * lVerticalSide;\n\n```\n\nbut this doesn't:\n\n```\nint aBottom = lSide1;\n\n```\n\nIf you're mixing the prefixes, they're to be considered part of the equation, and volume = area \\* length is fine for a box, but copying a length value into an area variable should raise some red flags.\n\nUnfortunately, the other notation is less useful, where people prefix the variable names with the type of the value, like this:\n\n```\nint iLength;\nint iVolume;\nint iArea;\n\n```\n\nsome people use n for number, or i for integer, f for float, s for string etc.\n\nThe original prefix was meant to be used to spot problems in equations, but has somehow devolved into making the code slightly easier to read 
since you don't have to go look for the variable declaration. With todays smart editors where you can simply hover over any variable to find the full type, and not just an abbreviation for it, this type of hungarian notation has lost a lot of its meaning.\n\nBut, you should make up your own mind. All I can say is that I don't use either.\n\n---\n\n**Edit** Just to add a short notice, while I don't use *Hungarian Notation*, I do use a prefix, and it's the underscore. I prefix all private fields of classes with a \\_ and otherwise spell their names as I would a property, titlecase with the first letter uppercase."}}},{"rowIdx":467,"cells":{"qid":{"kind":"number","value":5459,"string":"5,459"},"question":{"kind":"string","value":"
This is how I wanted to do it which would work in PHP 5.3.0+
\n\n
<?php\n class MyClass\n {\n const CONSTANT = 'Const var'; \n }\n\n $classname = 'MyClass';\n echo $classname::CONSTANT; // As of PHP 5.3.0\n?>\n
\n\n
But I'm restricted to using PHP 5.2.6. Can anyone think of a simple way to simulate this behavior without instantiating the class?
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/07"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5459","https://Stackoverflow.com","https://Stackoverflow.com/users/117/"],"string":"[\n \"https://Stackoverflow.com/questions/5459\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/117/\"\n]"},"input":{"kind":"string","value":"This is how I wanted to do it which would work in PHP 5.3.0+\n\n```\n\n\n```\n\nBut I'm restricted to using PHP 5.2.6. Can anyone think of a simple way to simulate this behavior without instantiating the class?"},"output":{"kind":"string","value":"You can accomplish this without using [`eval`](http://php.net/manual/en/function.eval.php) in pre-5.3 code. Just use the [`constant`](http://php.net/manual/en/function.constant.php) function:\n\n```\n\n\n```"}}},{"rowIdx":468,"cells":{"qid":{"kind":"number","value":5473,"string":"5,473"},"question":{"kind":"string","value":"
Is it possible to undo the changes caused by the following command? If so, how?
What you want to do is to specify the sha1 of the commit you want to restore to. You can get the sha1 by examining the reflog (git reflog) and then doing
\n
git reset --hard <sha1 of desired commit>\n
\n
But don't wait too long... after a few weeks git will eventually see that commit as unreferenced and delete all the blobs.
Pat Notz is correct. You can get the commit back so long as it's been within a few days. git only garbage collects after about a month or so unless you explicitly tell it to remove newer blobs.
\n\n
$ git init\nInitialized empty Git repository in .git/\n\n$ echo \"testing reset\" > file1\n$ git add file1\n$ git commit -m 'added file1'\nCreated initial commit 1a75c1d: added file1\n 1 files changed, 1 insertions(+), 0 deletions(-)\n create mode 100644 file1\n\n$ echo \"added new file\" > file2\n$ git add file2\n$ git commit -m 'added file2'\nCreated commit f6e5064: added file2\n 1 files changed, 1 insertions(+), 0 deletions(-)\n create mode 100644 file2\n\n$ git reset --hard HEAD^\nHEAD is now at 1a75c1d... added file1\n\n$ cat file2\ncat: file2: No such file or directory\n\n$ git reflog\n1a75c1d... HEAD@{0}: reset --hard HEAD^: updating HEAD\nf6e5064... HEAD@{1}: commit: added file2\n\n$ git reset --hard f6e5064\nHEAD is now at f6e5064... added file2\n\n$ cat file2\nadded new file\n
\n\n
You can see in the example that the file2 was removed as a result of the hard reset, but was put back in place when I reset via the reflog.
$ git rebase b72e67a9bb3f1fc1b64528bcce031af4f0d6fcbf\n
\n"},{"answer_id":29469,"author":"Jörg W Mittag","author_id":2988,"author_profile":"https://Stackoverflow.com/users/2988","pm_score":5,"selected":false,"text":"
If you have not yet garbage collected your repository (e.g. using git repack -d or git gc, but note that garbage collection can also happen automatically), then your commit is still there – it's just no longer reachable through the HEAD.
\n\n
You can try to find your commit by looking through the output of git fsck --lost-found.
\n\n
Newer versions of Git have something called the \"reflog\", which is a log of all changes that are made to the refs (as opposed to changes that are made to the repository contents). So, for example, every time you switch your HEAD (i.e. every time you do a git checkout to switch branches) that will be logged. And, of course, your git reset also manipulated the HEAD, so it was also logged. You can access older states of your refs in a similar way that you can access older states of your repository, by using an @ sign instead of a ~, like git reset HEAD@{1}.
\n\n
It took me a while to understand what the difference is between HEAD@{1} and HEAD~1, so here is a little explanation:
\n\n
git init\ngit commit --allow-empty -mOne\ngit commit --allow-empty -mTwo\ngit checkout -b anotherbranch\ngit commit --allow-empty -mThree\ngit checkout master # This changes the HEAD, but not the repository contents\ngit show HEAD~1 # => One\ngit show HEAD@{1} # => Three\ngit reflog\n
\n\n
So, HEAD~1 means \"go to the commit before the commit that HEAD currently points at\", while HEAD@{1} means \"go to the commit that HEAD pointed at before it pointed at where it currently points at\".
\n\n
That will easily allow you to find your lost commit and recover it.
I know this is an old thread... but as many people are searching for ways to undo stuff in Git, I still think it may be a good idea to continue giving tips here.
\n\n
When you do a \"git add\" or move anything from the top left to the bottom left in git gui the content of the file is stored in a blob and the file content is possible to recover from that blob.
\n\n
So it is possible to recover a file even if it was not committed but it has to have been added.
Depending on the state your repository was in when you ran the command, the effects of git reset --hard can range from trivial to undo, to basically impossible.
\n\n
Below I have listed a range of different possible scenarios, and how you might recover from them.
\n\n
All my changes were committed, but now the commits are gone!
\n\n
This situation usually occurs when you run git reset with an argument, as in git reset --hard HEAD~. Don't worry, this is easy to recover from!
\n\n
If you just ran git reset and haven't done anything else since, you can get back to where you were with this one-liner:
\n\n
git reset --hard @{1}\n
\n\n
This resets your current branch whatever state it was in before the last time it was modified (in your case, the most recent modification to the branch would be the hard reset you are trying to undo).
\n\n
If, however, you have made other modifications to your branch since the reset, the one-liner above won't work. Instead, you should run git reflog<branchname> to see a list of all recent changes made to your branch (including resets). That list will look something like this:
\n\n
7c169bd master@{0}: reset: moving to HEAD~\n3ae5027 master@{1}: commit: Changed file2\n7c169bd master@{2}: commit: Some change\n5eb37ca master@{3}: commit (initial): Initial commit\n
\n\n
Find the operation in this list that you want to \"undo\". In the example above, it would be the first line, the one that says \"reset: moving to HEAD~\". Then copy the representation of the commit before (below) that operation. In our case, that would be master@{1} (or 3ae5027, they both represent the same commit), and run git reset --hard <commit> to reset your current branch back to that commit.
\n\n
I staged my changes with git add, but never committed. Now my changes are gone!
\n\n
This is a bit trickier to recover from. git does have copies of the files you added, but since these copies were never tied to any particular commit you can't restore the changes all at once. Instead, you have to locate the individual files in git's database and restore them manually. You can do this using git fsck.
I had changes to files in my working directory that I never staged with git add, and never committed. Now my changes are gone!
\n\n
Uh oh. I hate to tell you this, but you're probably out of luck. git doesn't store changes that you don't add or commit to it, and according to the documentation for git reset:
\n\n
\n
--hard
\n \n
Resets the index and working tree. Any changes to tracked files in the working tree since <commit> are discarded.
\n
\n\n
It's possible that you might be able to recover your changes with some sort of disk recovery utility or a professional data recovery service, but at this point that's probably more trouble than it's worth.
I've just did a hard reset on wrong project. What saved my life was Eclipse's local history. IntelliJ Idea is said to have one, too, and so may your editor, it's worth checking:
Before answering lets add some background, explaining what is this HEAD.
\n
First of all what is HEAD?
\n
HEAD is simply a reference to the current commit (latest) on the current branch. \nThere can only be a single HEAD at any given time. (excluding git worktree)
\n
The content of HEAD is stored inside .git/HEAD and it contains the 40 bytes SHA-1 of the current commit.
\n\n
detached HEAD
\n
If you are not on the latest commit - meaning that HEAD is pointing to a prior commit in history its called detached HEAD.
\n
\n
On the command line it will look like this- SHA-1 instead of the branch name since the HEAD is not pointing to the the tip of the current branch
\n
\n\n
A few options on how to recover from a detached HEAD:
git checkout <commit_id>\ngit checkout -b <new branch> <commit_id>\ngit checkout HEAD~X // x is the number of commits t go back\n
\n
This will checkout new branch pointing to the desired commit. \nThis command will checkout to a given commit. \nAt this point you can create a branch and start to work from this point on.
\n
# Checkout a given commit. \n# Doing so will result in a `detached HEAD` which mean that the `HEAD`\n# is not pointing to the latest so you will need to checkout branch\n# in order to be able to update the code.\ngit checkout <commit-id>\n\n# create a new branch forked to the given commit\ngit checkout -b <branch name>\n
You can always use the reflog as well. \ngit reflog will display any change which updated the HEAD and checking out the desired reflog entry will set the HEAD back to this commit.
\n
Every time the HEAD is modified there will be a new entry in the reflog
"Move" your head back to the desired commit.
\n
# This will destroy any local modifications.\n# Don't do it if you have uncommitted work you want to keep.\ngit reset --hard 0d1d7fc32\n\n# Alternatively, if there's work to keep:\ngit stash\ngit reset --hard 0d1d7fc32\ngit stash pop\n# This saves the modifications, then reapplies that patch after resetting.\n# You could get merge conflicts, if you've modified things which were\n# changed since the commit you reset to.\n
\n
\n
Note: (Since Git 2.7) \nyou can also use the git rebase --no-autostash as well.
"Undo" the given commit or commit range. \nThe reset command will "undo" any changes made in the given commit. \nA new commit with the undo patch will be commited while the original commit will remain in the history as well.
\n
# add new commit with the undo of the original one.\n# the <sha-1> can be any commit(s) or commit range\ngit revert <sha-1>\n
\n\n
This schema illustrate which command does what. \nAs you can see there reset && checkout modify the HEAD.
If you are using a JetBrains IDE (anything IntelliJ based), you can recover even your uncommited changes via their \"Local History\" feature.
\n\n
Right-click on your top-level directory in your file tree, find \"Local History\" in the context menu, and choose \"Show History\". This will open up a view where your recent edits can be found, and once you have found the revision you want to go back to, right click on it and click \"Revert\".
My problem is almost similar. I have uncommitted files before I enter git reset --hard.
\n
Thankfully. I managed to skip all these resources. After I noticed that I can just undo (ctrl-z for windows/linux cmd-shift-z for mac). I just want to add this to all of the answers above.
git reflog and back to the last HEAD\n6a56624 (HEAD -> master) HEAD@{0}: reset: moving to HEAD~3\n1a9bf73 HEAD@{1}: commit: add changes in model generate binary
What you want to do is to specify the sha1 of the commit you want to restore to. You can get the sha1 by examining the reflog (git reflog) and then doing
\\n
git reset --hard <sha1 of desired commit>\\n
\\n
But don't wait too long... after a few weeks git will eventually see that commit as unreferenced and delete all the blobs.
Pat Notz is correct. You can get the commit back so long as it's been within a few days. git only garbage collects after about a month or so unless you explicitly tell it to remove newer blobs.
\\n\\n
$ git init\\nInitialized empty Git repository in .git/\\n\\n$ echo \\\"testing reset\\\" > file1\\n$ git add file1\\n$ git commit -m 'added file1'\\nCreated initial commit 1a75c1d: added file1\\n 1 files changed, 1 insertions(+), 0 deletions(-)\\n create mode 100644 file1\\n\\n$ echo \\\"added new file\\\" > file2\\n$ git add file2\\n$ git commit -m 'added file2'\\nCreated commit f6e5064: added file2\\n 1 files changed, 1 insertions(+), 0 deletions(-)\\n create mode 100644 file2\\n\\n$ git reset --hard HEAD^\\nHEAD is now at 1a75c1d... added file1\\n\\n$ cat file2\\ncat: file2: No such file or directory\\n\\n$ git reflog\\n1a75c1d... HEAD@{0}: reset --hard HEAD^: updating HEAD\\nf6e5064... HEAD@{1}: commit: added file2\\n\\n$ git reset --hard f6e5064\\nHEAD is now at f6e5064... added file2\\n\\n$ cat file2\\nadded new file\\n
\\n\\n
You can see in the example that the file2 was removed as a result of the hard reset, but was put back in place when I reset via the reflog.
If you have not yet garbage collected your repository (e.g. using git repack -d or git gc, but note that garbage collection can also happen automatically), then your commit is still there – it's just no longer reachable through the HEAD.
\\n\\n
You can try to find your commit by looking through the output of git fsck --lost-found.
\\n\\n
Newer versions of Git have something called the \\\"reflog\\\", which is a log of all changes that are made to the refs (as opposed to changes that are made to the repository contents). So, for example, every time you switch your HEAD (i.e. every time you do a git checkout to switch branches) that will be logged. And, of course, your git reset also manipulated the HEAD, so it was also logged. You can access older states of your refs in a similar way that you can access older states of your repository, by using an @ sign instead of a ~, like git reset HEAD@{1}.
\\n\\n
It took me a while to understand what the difference is between HEAD@{1} and HEAD~1, so here is a little explanation:
\\n\\n
git init\\ngit commit --allow-empty -mOne\\ngit commit --allow-empty -mTwo\\ngit checkout -b anotherbranch\\ngit commit --allow-empty -mThree\\ngit checkout master # This changes the HEAD, but not the repository contents\\ngit show HEAD~1 # => One\\ngit show HEAD@{1} # => Three\\ngit reflog\\n
\\n\\n
So, HEAD~1 means \\\"go to the commit before the commit that HEAD currently points at\\\", while HEAD@{1} means \\\"go to the commit that HEAD pointed at before it pointed at where it currently points at\\\".
\\n\\n
That will easily allow you to find your lost commit and recover it.
I know this is an old thread... but as many people are searching for ways to undo stuff in Git, I still think it may be a good idea to continue giving tips here.
\\n\\n
When you do a \\\"git add\\\" or move anything from the top left to the bottom left in git gui the content of the file is stored in a blob and the file content is possible to recover from that blob.
\\n\\n
So it is possible to recover a file even if it was not committed but it has to have been added.
Depending on the state your repository was in when you ran the command, the effects of git reset --hard can range from trivial to undo, to basically impossible.
\\n\\n
Below I have listed a range of different possible scenarios, and how you might recover from them.
\\n\\n
All my changes were committed, but now the commits are gone!
\\n\\n
This situation usually occurs when you run git reset with an argument, as in git reset --hard HEAD~. Don't worry, this is easy to recover from!
\\n\\n
If you just ran git reset and haven't done anything else since, you can get back to where you were with this one-liner:
\\n\\n
git reset --hard @{1}\\n
\\n\\n
This resets your current branch whatever state it was in before the last time it was modified (in your case, the most recent modification to the branch would be the hard reset you are trying to undo).
\\n\\n
If, however, you have made other modifications to your branch since the reset, the one-liner above won't work. Instead, you should run git reflog<branchname> to see a list of all recent changes made to your branch (including resets). That list will look something like this:
\\n\\n
7c169bd master@{0}: reset: moving to HEAD~\\n3ae5027 master@{1}: commit: Changed file2\\n7c169bd master@{2}: commit: Some change\\n5eb37ca master@{3}: commit (initial): Initial commit\\n
\\n\\n
Find the operation in this list that you want to \\\"undo\\\". In the example above, it would be the first line, the one that says \\\"reset: moving to HEAD~\\\". Then copy the representation of the commit before (below) that operation. In our case, that would be master@{1} (or 3ae5027, they both represent the same commit), and run git reset --hard <commit> to reset your current branch back to that commit.
\\n\\n
I staged my changes with git add, but never committed. Now my changes are gone!
\\n\\n
This is a bit trickier to recover from. git does have copies of the files you added, but since these copies were never tied to any particular commit you can't restore the changes all at once. Instead, you have to locate the individual files in git's database and restore them manually. You can do this using git fsck.
I had changes to files in my working directory that I never staged with git add, and never committed. Now my changes are gone!
\\n\\n
Uh oh. I hate to tell you this, but you're probably out of luck. git doesn't store changes that you don't add or commit to it, and according to the documentation for git reset:
\\n\\n
\\n
--hard
\\n \\n
Resets the index and working tree. Any changes to tracked files in the working tree since <commit> are discarded.
\\n
\\n\\n
It's possible that you might be able to recover your changes with some sort of disk recovery utility or a professional data recovery service, but at this point that's probably more trouble than it's worth.
I've just did a hard reset on wrong project. What saved my life was Eclipse's local history. IntelliJ Idea is said to have one, too, and so may your editor, it's worth checking:
Before answering lets add some background, explaining what is this HEAD.
\\n
First of all what is HEAD?
\\n
HEAD is simply a reference to the current commit (latest) on the current branch. \\nThere can only be a single HEAD at any given time. (excluding git worktree)
\\n
The content of HEAD is stored inside .git/HEAD and it contains the 40 bytes SHA-1 of the current commit.
\\n\\n
detached HEAD
\\n
If you are not on the latest commit - meaning that HEAD is pointing to a prior commit in history its called detached HEAD.
\\n
\\n
On the command line it will look like this- SHA-1 instead of the branch name since the HEAD is not pointing to the the tip of the current branch
\\n
\\n\\n
A few options on how to recover from a detached HEAD:
git checkout <commit_id>\\ngit checkout -b <new branch> <commit_id>\\ngit checkout HEAD~X // x is the number of commits t go back\\n
\\n
This will checkout new branch pointing to the desired commit. \\nThis command will checkout to a given commit. \\nAt this point you can create a branch and start to work from this point on.
\\n
# Checkout a given commit. \\n# Doing so will result in a `detached HEAD` which mean that the `HEAD`\\n# is not pointing to the latest so you will need to checkout branch\\n# in order to be able to update the code.\\ngit checkout <commit-id>\\n\\n# create a new branch forked to the given commit\\ngit checkout -b <branch name>\\n
You can always use the reflog as well. \\ngit reflog will display any change which updated the HEAD and checking out the desired reflog entry will set the HEAD back to this commit.
\\n
Every time the HEAD is modified there will be a new entry in the reflog
"Move" your head back to the desired commit.
\\n
# This will destroy any local modifications.\\n# Don't do it if you have uncommitted work you want to keep.\\ngit reset --hard 0d1d7fc32\\n\\n# Alternatively, if there's work to keep:\\ngit stash\\ngit reset --hard 0d1d7fc32\\ngit stash pop\\n# This saves the modifications, then reapplies that patch after resetting.\\n# You could get merge conflicts, if you've modified things which were\\n# changed since the commit you reset to.\\n
\\n
\\n
Note: since Git 2.7 you can also use git rebase --no-autostash.

"Undo" the given commit or commit range.
The revert command will "undo" any changes made in the given commit.
A new commit with the undo patch will be committed, while the original commit will remain in the history as well.
\\n
# Add a new commit with the undo of the original one.
# The <sha-1> can be any commit(s) or commit range
git revert <sha-1>
\\n\\n
This schema illustrates which command does what.
As you can see, reset && checkout modify the HEAD.
If you are using a JetBrains IDE (anything IntelliJ based), you can recover even your uncommitted changes via their "Local History" feature.
\\n\\n
Right-click on your top-level directory in your file tree, find "Local History" in the context menu, and choose "Show History". This will open up a view where your recent edits can be found. Once you have found the revision you want to go back to, right-click on it and click "Revert".
My problem was almost similar: I had uncommitted files before I entered git reset --hard.
\\n
Thankfully, I managed to skip all these steps: I noticed that I could just undo in the editor (Ctrl+Z on Windows/Linux, Cmd+Shift+Z on Mac). I just want to add this to all of the answers above.
git reflog and go back to the last HEAD:

6a56624 (HEAD -> master) HEAD@{0}: reset: moving to HEAD~3
1a9bf73 HEAD@{1}: commit: add changes in model generate binary
git reset --hard: you can use it to revert one page, and after that you can stash or pull everything from origin again.
The ASP.NET AJAX ModalPopupExtender has OnCancelScript and OnOkScript properties, but it doesn't seem to have an OnShowScript property. I'd like to specify a javascript function to run each time the popup is shown.
\n\n
In past situations, I set the TargetControlID to a dummy control and provide my own control that first does some JS code and then uses the JS methods to show the popup. But in this case, I am showing the popup from both client and server side code.
\n\n
Anyone know of a way to do this?
\n\n
BTW, I needed this because I have a textbox in the modal that I want to make a TinyMCE editor. But the TinyMCE init script doesn't work on invisible textboxes, so I had to find a way to run it at the time the modal was shown
If you are using a button or hyperlink or something to trigger the popup to show, could you also add an additional handler to the onClick event of the trigger which should still fire the modal popup and run the javascript at the same time?
The ModalPopupExtender modifies the button/hyperlink that you tell it to be the \"trigger\" element. The onclick script I add triggers before the popup is shown. I want script to fire after the popup is shown.
\n\n
Also, still leaves me with the problem of when I show the modal from server side.
hmmm... I'm pretty sure that there's a shown event for the MPE... this is off the top of my head, but I think you can add an event handler to the shown event on page_load
\n\n
function pageLoad()
{
    var popup = $find('ModalPopupClientID');
    popup.add_shown(SetFocus);
}

function SetFocus()
{
    $get('TriggerClientId').focus();
}
\n\n
i'm not sure tho if this will help you with calling it from the server side
TinyMCE works on an invisible textbox if you hide it with CSS (display:none;).
You can put an "onclick" event on the TargetControlID to init TinyMCE, if you also use an UpdatePanel.
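Putting this together for the TinyMCE case in the question, here is a sketch; it assumes the TinyMCE 3.x API (mceAddControl) and the ASP.NET AJAX add_shown event, and the client IDs are placeholders:

```javascript
// Runs TinyMCE initialization only after the ModalPopupExtender has
// actually shown the popup, so the textbox is visible when TinyMCE attaches.
function pageLoad() {
    var popup = $find('ModalPopupClientID');
    popup.add_shown(function () {
        tinyMCE.execCommand('mceAddControl', false, 'MyTextBoxClientID');
    });
}
```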
protected void btnNuevo_Click(object sender, EventArgs e)
{
    // To bring the modal form back up from the server side
    ScriptManager.RegisterStartupScript(Page, Page.GetType(), "key", "<script>launchModal('" + ModalPopupExtender_Factura.ID.ToString() + "');</script>", false);
}
How are you handling the entry of numeric values in WPF applications?
\n\n
Without a NumericUpDown control, I've been using a TextBox and handling its PreviewKeyDown event with the code below, but it's pretty ugly.
\n\n
Has anyone found a more graceful way to get numeric data from the user without relying on a third-party control?
Call me crazy, but why not put plus and minus buttons at either side of the TextBox control and simply prevent the TextBox from receiving cursor focus, thereby creating your own cheap NumericUpDown control?
You can also try using data validation if users commit data before you use it. Doing that I found was fairly simple and cleaner than fiddling about with keys.
This doesn't actually address the validation issues that are referred to in this question, but it addresses what I do about not having a numeric up/down control. Using it for a little bit, I think I might actually like it better than the old numeric up/down control.
\n\n
The code isn't perfect, but it handles the cases I needed it to handle:
\n\n
\n
Up arrow, Down arrow
\n
Shift + Up arrow, Shift + Down arrow
\n
Page Up, Page Down
\n
Binding Converter on the text property
\n
\n\n
Code behind
\n\n
using System;
using System.Collections.Generic;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Input;

namespace Helpers
{
    public class TextBoxNumbers
    {
        public static Decimal GetSingleDelta(DependencyObject obj)
        {
            return (Decimal)obj.GetValue(SingleDeltaProperty);
        }

        public static void SetSingleDelta(DependencyObject obj, Decimal value)
        {
            obj.SetValue(SingleDeltaProperty, value);
        }

        // Using a DependencyProperty as the backing store for SingleValue. This enables animation, styling, binding, etc...
        public static readonly DependencyProperty SingleDeltaProperty =
            DependencyProperty.RegisterAttached("SingleDelta", typeof(Decimal), typeof(TextBoxNumbers), new UIPropertyMetadata(0.0m, new PropertyChangedCallback(f)));

        public static void f(DependencyObject o, DependencyPropertyChangedEventArgs e)
        {
            TextBox t = o as TextBox;

            if (t == null)
                return;

            t.PreviewKeyDown += new System.Windows.Input.KeyEventHandler(t_PreviewKeyDown);
        }

        private static Decimal GetSingleValue(DependencyObject obj)
        {
            return GetSingleDelta(obj);
        }

        private static Decimal GetDoubleValue(DependencyObject obj)
        {
            return GetSingleValue(obj) * 10;
        }

        private static Decimal GetTripleValue(DependencyObject obj)
        {
            return GetSingleValue(obj) * 100;
        }

        static void t_PreviewKeyDown(object sender, System.Windows.Input.KeyEventArgs e)
        {
            TextBox t = sender as TextBox;
            Decimal i;

            if (t == null)
                return;

            if (!Decimal.TryParse(t.Text, out i))
                return;

            switch (e.Key)
            {
                case System.Windows.Input.Key.Up:
                    if (Keyboard.Modifiers == ModifierKeys.Shift)
                        i += GetDoubleValue(t);
                    else
                        i += GetSingleValue(t);
                    break;

                case System.Windows.Input.Key.Down:
                    if (Keyboard.Modifiers == ModifierKeys.Shift)
                        i -= GetDoubleValue(t);
                    else
                        i -= GetSingleValue(t);
                    break;

                case System.Windows.Input.Key.PageUp:
                    i += GetTripleValue(t);
                    break;

                case System.Windows.Input.Key.PageDown:
                    i -= GetTripleValue(t);
                    break;

                default:
                    return;
            }

            if (BindingOperations.IsDataBound(t, TextBox.TextProperty))
            {
                try
                {
                    Binding binding = BindingOperations.GetBinding(t, TextBox.TextProperty);
                    t.Text = (string)binding.Converter.Convert(i, null, binding.ConverterParameter, binding.ConverterCulture);
                }
                catch
                {
                    t.Text = i.ToString();
                }
            }
            else
                t.Text = i.ToString();
        }
    }
}
This is the easiest technique I've found to accomplish this. The down side is that the context menu of the TextBox still allows non-numerics via Paste. To resolve this quickly I simply added the attribute/property: ContextMenu="{x:Null}" to the TextBox, thereby disabling it. Not ideal but for my scenario it will suffice.
\n\n
Obviously you could add a few more keys/chars in the test to include additional acceptable values (e.g. '.', '$' etc...)
This is how I do it. It uses a regular expression to check if the text that will be in the box is numeric or not.
\n\n
Regex NumEx = new Regex(@"^-?\d*\.?\d*$");

private void TextBox_PreviewTextInput(object sender, TextCompositionEventArgs e)
{
    if (sender is TextBox)
    {
        string text = (sender as TextBox).Text + e.Text;
        e.Handled = !NumEx.IsMatch(text);
    }
    else
        throw new NotImplementedException("TextBox_PreviewTextInput Can only Handle TextBoxes");
}
\n\n
There is now a much better way to do this in WPF and Silverlight. If your control is bound to a property, all you have to do is change your binding statement a bit. Use the following for your binding:
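The binding statement itself seems to have been lost from this answer; it presumably looked something like the following (the property name Age is a placeholder):

```xml
<TextBox Text="{Binding Path=Age, ValidatesOnExceptions=True, UpdateSourceTrigger=PropertyChanged}" />
```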
Note that you can use this on custom properties too: all you have to do is throw an exception if the value in the box is invalid, and the control will get highlighted with a red border. If you click on the upper right of the red border, the exception message will pop up.
Private Sub Value1TextBox_PreviewTextInput(ByVal sender As Object, ByVal e As TextCompositionEventArgs) Handles Value1TextBox.PreviewTextInput
    Try
        If Not IsNumeric(e.Text) Then
            e.Handled = True
        End If
    Catch ex As Exception
    End Try
End Sub
My version of Arcturus' answer: you can change the convert method to work with int / uint / decimal / byte (for colours) or any other numeric format you care to use; it also works with copy/paste.
\n\n
protected override void OnPreviewTextInput( System.Windows.Input.TextCompositionEventArgs e )
{
    try
    {
        if ( String.IsNullOrEmpty( SelectedText ) )
        {
            Convert.ToDecimal( this.Text.Insert( this.CaretIndex, e.Text ) );
        }
        else
        {
            Convert.ToDecimal( this.Text.Remove( this.SelectionStart, this.SelectionLength ).Insert( this.SelectionStart, e.Text ) );
        }
    }
    catch
    {
        // mark as handled if cannot convert string to decimal
        e.Handled = true;
    }

    base.OnPreviewTextInput( e );
}
Why don't you just try using the KeyDown event rather than the PreviewKeyDown event? You can stop the invalid characters there, but all the control characters are accepted. This seems to work for me:
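The code sample appears to have been lost from this answer; a minimal sketch of the described approach (the handler name is illustrative) might look like:

```csharp
private void NumericTextBox_KeyDown(object sender, KeyEventArgs e)
{
    // Allow digits from the main row and the numeric keypad; let the usual
    // editing/navigation keys through; block everything else.
    bool isDigit = (e.Key >= Key.D0 && e.Key <= Key.D9) ||
                   (e.Key >= Key.NumPad0 && e.Key <= Key.NumPad9);
    bool isEditing = e.Key == Key.Back || e.Key == Key.Delete || e.Key == Key.Tab ||
                     e.Key == Key.Left || e.Key == Key.Right ||
                     e.Key == Key.Home || e.Key == Key.End;
    e.Handled = !(isDigit || isEditing);
}
```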
I use a custom ValidationRule to check if text is numeric.
\n\n
public class DoubleValidation : ValidationRule
{
    public override ValidationResult Validate(object value, System.Globalization.CultureInfo cultureInfo)
    {
        if (value is string)
        {
            double number;
            if (!Double.TryParse((value as string), out number))
                return new ValidationResult(false, "Please enter a valid number");
        }

        return ValidationResult.ValidResult;
    }
}
\n\n
Then when I bind a TextBox to a numeric property, I add the new custom class to the Binding.ValidationRules collection. In the example below the validation rule is checked every time the TextBox.Text changes.
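The XAML example referred to seems to be missing; it presumably resembled the following (the local prefix is assumed to map to the namespace containing DoubleValidation, and the property name is a placeholder):

```xml
<TextBox>
  <TextBox.Text>
    <Binding Path="MyNumber" UpdateSourceTrigger="PropertyChanged">
      <Binding.ValidationRules>
        <local:DoubleValidation />
      </Binding.ValidationRules>
    </Binding>
  </TextBox.Text>
</TextBox>
```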
Combining the ideas from a few of these answers, I have created a NumericTextBox that
\n\n
\n
Handles decimals
\n
Does some basic validation to ensure any entered '-' or '.' is valid
\n
Handles pasted values
\n
\n\n
Please feel free to update if you can think of any other logic that should be included.
\n\n
public class NumericTextBox : TextBox
{
    public NumericTextBox()
    {
        DataObject.AddPastingHandler(this, OnPaste);
    }

    private void OnPaste(object sender, DataObjectPastingEventArgs dataObjectPastingEventArgs)
    {
        var isText = dataObjectPastingEventArgs.SourceDataObject.GetDataPresent(System.Windows.DataFormats.Text, true);

        if (isText)
        {
            var text = dataObjectPastingEventArgs.SourceDataObject.GetData(DataFormats.Text) as string;
            if (IsTextValid(text))
            {
                return;
            }
        }

        dataObjectPastingEventArgs.CancelCommand();
    }

    private bool IsTextValid(string enteredText)
    {
        if (!enteredText.All(c => Char.IsNumber(c) || c == '.' || c == '-'))
        {
            return false;
        }

        // We only validate against unselected text since the selected text will be replaced by the entered text
        var unselectedText = this.Text.Remove(SelectionStart, SelectionLength);

        if (enteredText == "." && unselectedText.Contains("."))
        {
            return false;
        }

        if (enteredText == "-" && unselectedText.Length > 0)
        {
            return false;
        }

        return true;
    }

    protected override void OnPreviewTextInput(System.Windows.Input.TextCompositionEventArgs e)
    {
        e.Handled = !IsTextValid(e.Text);
        base.OnPreviewTextInput(e);
    }
}
true;\n}\n\n```"}}},{"rowIdx":471,"cells":{"qid":{"kind":"number","value":5611,"string":"5,611"},"question":{"kind":"string","value":"
I know that just using rand() is predictable, if you know what you're doing, and have access to the server.
\n\n
I have a project that is highly dependent upon choosing a random number that is as unpredictable as possible. So I'm looking for suggestions, either other built-in functions or user functions, that can generate a better random number.
You're right about the PHP rand() function. See the second figure on Statistical Analysis for a striking illustration. (The first figure is striking, but it's been drawn by Scott Adams, not plotted with rand().)
\n\n
One solution is to use a true random generator such as random.org. Another, if you're on Linux/BSD/etc. is to use /dev/random. If the randomness is mission critical, you will have to use a hardware random generator.
I would be wary of the impression of randomness: there have been many experiments where people would choose the less random distribution. It seems the mind is not very good at producing or estimating randomness.
\n\n
There are good articles on randomness at Fourmilab, including another true random generator. Maybe you could get random data from both sites so if one is down you still have the other.
\n\n
Fourmilab also provides a test program to check randomness. You could use it to check your various myRand() programs.
\n\n
As for your last program, if you generate 10000 values, why don't you choose the final value amongst the 10 thousand? You restrict yourself to a subset. Also, it won't work if your $min and $max are greater than 10000.
\n\n
Anyway, the randomness you need depends on your application. rand() will be OK for an online game, but not OK for cryptography (anything not thoroughly tested with statistical programs will not be suitable for cryptography anyway). You be the judge!
Generates cryptographic random integers that are suitable for use where unbiased results are critical (i.e. shuffling a Poker deck).
\n
\n\n
For a more detailed explanation about PRNG and CSPRNG (and their difference), as well as why your original approach is actually a bad idea, please read my other, highly similar answer.
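For reference, a minimal sketch of that CSPRNG approach (assumes PHP 7+, or the random_compat polyfill on PHP 5.x):

```php
<?php
// Unbiased, cryptographically secure integer in [0, 100]
$n = random_int(0, 100);

// Cryptographically secure raw bytes, e.g. for a session token
$token = bin2hex(random_bytes(16));
```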
If the randomness is mission critical, you will have to use a [hardware random generator](http://en.wikipedia.org/wiki/Hardware_random_number_generator)."}}},{"rowIdx":472,"cells":{"qid":{"kind":"number","value":5694,"string":"5,694"},"question":{"kind":"string","value":"
I got this error today when trying to open a Visual Studio 2008 project in Visual Studio 2005:
\n\n
\n
The imported project \"C:\\Microsoft.CSharp.targets\" was not found.
This link on MSDN also helps a lot to understand the reason why it doesn't work. $(MSBuildToolsPath) is the path to Microsoft.Build.Engine v3.5 (inserted automatically in a project file when you create in VS2008). If you try to build your project for .Net 2.0, be sure that you changed this path to $(MSBuildBinPath) which is the path to Microsoft.Build.Engine v2.0.
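As a concrete illustration (paths abbreviated; this is a sketch of the standard MSBuild import element, not taken from the project in question), the relevant line in the .csproj looks something like:

```xml
<!-- VS2008 / .NET 3.5 projects import via $(MSBuildToolsPath): -->
<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

<!-- VS2005 / .NET 2.0 projects import via $(MSBuildBinPath): -->
<Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
```

Switching between the two properties is what retargets the project between the v3.5 and v2.0 build engines.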
I got this after reinstalling Windows. Visual Studio was installed, and I could see the Silverlight project type in the New Project window, but opening one didn't work. The solution was simple: I had to install the Silverlight Developer runtime and/or the Microsoft Silverlight 4 Tools for Visual Studio. This may seem stupid, but I overlooked it because I thought it should work, as the Silverlight project type was available.
This error can also occur when opening a Silverlight project that was built in SL 4, while you have SL 5 installed.
\n\n
Here is an example error message: The imported project \"C:\\Program Files (x86)\\MSBuild\\Microsoft\\Silverlight\\v4.0\\Microsoft.Silverlight.CSharp.targets\" was not found.
If you are to encounter the error that says Microsoft.CSharp.Core.targets not found, these are the steps I took to correct mine:
\n\n\n
Open any previous working project's folder and navigate to the path shown in the error, that is Projects/(working project name)/packages/Microsoft.Net.Compilers.1.3.2/tools/, and search for the Microsoft.CSharp.Core.targets file.
\n
Copy this file and put it in the non-working project tools folder (that is, navigating to the tools folder in the non-working project as shown above)
\n
Now close your project (if it was open) and reopen it.
\n\n\n
It should be working now.
\n\n
Also, to make sure everything is working properly in your now open Visual Studio project, go to Tools > NuGet Package Manager > Manage NuGet Packages For Solution. Here, you might find an error that says CodeAnalysis.dll is being used by another application.
\n\n
Again, go to the tools folder, find the specified file and delete it. Come back to Manage NuGet Packages For Solution. You will find a link that will ask you to Reload, click it and everything gets re-installed.
For me the issue was that the path of the project contained %20 characters, because git added those instead of spaces when the repository was cloned. Another problem might be if the path to a package is too long.
You should check if the number is correct (the reason it's wrong can be that the project was created with another version of Visual Studio). If it's wrong, replace it with your current version of build tools OR use the VS variable:
I ran into this issue while executing an Ansible playbook so I want to add my 2 cents here. I noticed a warning message about missing Visual Studio 14. Visual Studio version 14 was released in 2015 and the solution to my problem was installing Visual Studio 2015 Professional on the host machine of my Azure DevOps agent.
After trying to restore, closing VS, deleting the failed package, reopening, and trying to restore again, multiple times, I just deleted everything in packages; when I did a restore after that, it worked perfectly.
For me, the issue was the path. When cloning, the project had a space in its name, so the project folder was named "Sample%20-%205" instead of what it should be: "Sample - 5"
\n
Opening the project was fine, but building failed with
\n
\n
Could not find the file:\n/packages/Microsoft.Net.Compilers.1.3.2/tools/Microsoft.CSharp.Core.targets
For me the issue was that the solution was too deep in the Documents folder, and on Windows 10 there is a path character limit which was reached. As soon as I moved the solution folder up a couple of folders, this fixed the issue.
Why are pointers such a leading factor of confusion for many new, and even old, college level students in C or C++? Are there any tools or thought processes that helped you understand how pointers work at the variable, function, and beyond level?
\n\n
What are some good practice things that can be done to bring somebody to the level of, \"Ah-hah, I got it,\" without getting them bogged down in the overall concept? Basically, drill-like scenarios.
Why are pointers such a leading factor of confusion for many new, and even old, college level students in the C/C++ language?
\n
\n\n
The concept of a placeholder for a value - variables - maps onto something we're taught in school - algebra. There isn't an existing parallel you can draw without understanding how memory is physically laid out within a computer, and no one thinks about this kind of thing until they're dealing with low level things - at the C/C++/byte communications level.
\n\n
\n
Are there any tools or thought processes that helped you understand how pointers work at the variable, function, and beyond level?
\n
\n\n
Addresses boxes. I remember when I was learning to program BASIC into microcomputers, there were these pretty books with games in them, and sometimes you had to poke values into particular addresses. They had a picture of a bunch of boxes, incrementally labelled with 0, 1, 2... and it was explained that only one small thing (a byte) could fit in these boxes, and there were a lot of them - some computers had as many as 65535! They were next to each other, and they all had an address.
\n\n
\n
What are some good practice things that can be done to bring somebody to the level of, \"Ah-hah, I got it,\" without getting them bogged down in the overall concept? Basically, drill-like scenarios.
I don't think pointers as a concept are particularly tricky - most students' mental models map to something like this and some quick box sketches can help.
\n\n
The difficulty, at least that which I've experienced in the past and seen others deal with, is that the management of pointers in C/C++ can be unnecessarily convoluted.
I don't see what is so confusing about pointers. They point to a location in memory, that is it stores the memory address. In C/C++ you can specify the type the pointer points to. For example:
\n\n
int* my_int_pointer;\n
\n\n
Says that my_int_pointer contains the address to a location that contains an int.
\n\n
The problem with pointers is that they point to a location in memory, so it is easy to trail off into some location you should not be in. As proof look at the numerous security holes in C/C++ applications from buffer overflow (incrementing the pointer past the allocated boundary).
The reason pointers seem to confuse so many people is that they mostly come with little or no background in computer architecture. Since many don't seem to have an idea of how computers (the machines) are actually implemented - working in C/C++ seems alien.
\n\n
A drill is to ask them to implement a simple bytecode based virtual machine (in any language they chose, python works great for this) with an instruction set focussed on pointer operations (load, store, direct/indirect addressing). Then ask them to write simple programs for that instruction set.
\n\n
Anything requiring slightly more than simple addition is going to involve pointers and they are sure to get it.
I think that the main reason that people have trouble with it is because it's generally not taught in an interesting and engaging manner. I'd like to see a lecturer get 10 volunteers from the crowd and give them a 1 meter ruler each, get them to stand around in a certain configuration and use the rulers to point at each other. Then show pointer arithmetic by moving people around (and where they point their rulers). It'd be a simple but effective (and above all memorable) way of showing the concepts without getting too bogged down in the mechanics.
\n\n
Once you get to C and C++ it seems to get harder for some people. I'm not sure if this is because they are finally putting theory that they don't properly grasp into practice or because pointer manipulation is inherently harder in those languages. I can't remember my own transition that well, but I knew pointers in Pascal and then moved to C and got totally lost.
\n"},{"answer_id":5754,"author":"Lasse V. Karlsen","author_id":267,"author_profile":"https://Stackoverflow.com/users/267","pm_score":11,"selected":true,"text":"
Pointers is a concept that for many can be confusing at first, in particular when it comes to copying pointer values around and still referencing the same memory block.
\n\n
I've found that the best analogy is to consider the pointer as a piece of paper with a house address on it, and the memory block it references as the actual house. All sorts of operations can thus be easily explained.
\n\n
I've added some Delphi code down below, and some comments where appropriate. I chose Delphi since my other main programming language, C#, does not exhibit things like memory leaks in the same way.
\n\n
If you only wish to learn the high-level concept of pointers, then you should ignore the parts labelled \"Memory layout\" in the explanation below. They are intended to give examples of what memory could look like after operations, but they are more low-level in nature. However, in order to accurately explain how buffer overruns really work, it was important that I added these diagrams.
\n\n
Disclaimer: For all intents and purposes, this explanation and the example memory\nlayouts are vastly simplified. There's more overhead and a lot more details you would\nneed to know if you need to deal with memory on a low-level basis. However, for the\nintents of explaining memory and pointers, it is accurate enough.
\n\n\n\n
Let's assume the THouse class used below is a simple class whose constructor takes a name (the class definition itself is omitted here).
When you initialize the house object, the name given to the constructor is copied into the private field FName. There is a reason it is defined as a fixed-size array.
\n\n
In memory, there will be some overhead associated with the house allocation, I'll illustrate this below like this:
The \"tttt\" area is overhead, there will typically be more of this for various types of runtimes and languages, like 8 or 12 bytes. It is imperative that whatever values are stored in this area never gets changed by anything other than the memory allocator or the core system routines, or you risk crashing the program.
\n\n\n\n
Allocate memory
\n\n
Get an entrepreneur to build your house, and give you the address to the house. In contrast to the real world, memory allocation cannot be told where to allocate, but will find a suitable spot with enough room, and report back the address to the allocated memory.
\n\n
In other words, the entrepreneur will choose the spot.
\n\n
THouse.Create('My house');\n
\n\n
Memory layout:
\n\n
\n---[ttttNNNNNNNNNN]---\n 1234My house\n
\n\n\n\n
Keep a variable with the address
\n\n
Write the address to your new house down on a piece of paper. This paper will serve as your reference to your house. Without this piece of paper, you're lost, and cannot find the house, unless you're already in it.
\n\n
var\n h: THouse;\nbegin\n h := THouse.Create('My house');\n ...\n
\n\n
Memory layout:
\n\n
\n h\n v\n---[ttttNNNNNNNNNN]---\n 1234My house\n
\n\n\n\n
Copy pointer value
\n\n
Just write the address on a new piece of paper. You now have two pieces of paper that will get you to the same house, not two separate houses. Any attempts to follow the address from one paper and rearrange the furniture at that house will make it seem that the other house has been modified in the same manner, unless you can explicitly detect that it's actually just one house.
\n\n
Note This is usually the concept that I have the most problem explaining to people: two pointers do not mean two objects or memory blocks.
\n\n
var\n h1, h2: THouse;\nbegin\n h1 := THouse.Create('My house');\n h2 := h1; // copies the address, not the house\n ...\n
Free the memory

Demolish the house. You can then later on reuse the paper for a new address if you so wish, or clear it to forget the address to the house that no longer exists.
\n\n
var\n h: THouse;\nbegin\n h := THouse.Create('My house');\n ...\n h.Free;\n h := nil;\n
\n\n
Here I first construct the house, and get hold of its address. Then I do something to the house (use it, the ... code, left as an exercise for the reader), and then I free it. Lastly I clear the address from my variable.
\n\n
Memory layout:
\n\n
\n h <--+\n v +- before free\n---[ttttNNNNNNNNNN]--- |\n 1234My house <--+\n\n h (now points nowhere) <--+\n +- after free\n---------------------- | (note, memory might still\n xx34My house <--+ contain some data)\n
\n\n\n\n
Dangling pointers
\n\n
You tell your entrepreneur to destroy the house, but you forget to erase the address from your piece of paper. When later on you look at the piece of paper, you've forgotten that the house is no longer there, and go to visit it, with failed results (see also the part about an invalid reference below).
\n\n
var\n h: THouse;\nbegin\n h := THouse.Create('My house');\n ...\n h.Free;\n ... // forgot to clear h here\n h.OpenFrontDoor; // will most likely fail\n
\n\n
Using h after the call to .Free might work, but that is just pure luck. Most likely it will fail, at a customer's place, in the middle of a critical operation.
\n\n
\n h <--+\n v +- before free\n---[ttttNNNNNNNNNN]--- |\n 1234My house <--+\n\n h <--+\n v +- after free\n---------------------- |\n xx34My house <--+\n
\n\n
As you can see, h still points to the remnants of the data in memory, but\nsince it might not be complete, using it as before might fail.
\n\n\n\n
Memory leak
\n\n
You lose the piece of paper and cannot find the house. The house is still standing somewhere though, and when you later on want to construct a new house, you cannot reuse that spot.
\n\n
var\n h: THouse;\nbegin\n h := THouse.Create('My house');\n h := THouse.Create('My house'); // uh-oh, what happened to our first house?\n ...\n h.Free;\n h := nil;\n
\n\n
Here we overwrote the contents of the h variable with the address of a new house, but the old one is still standing... somewhere. After this code, there is no way to reach that house, and it will be left standing. In other words, the allocated memory will stay allocated until the application closes, at which point the operating system will tear it down.
\n\n
Memory layout after first allocation:
\n\n
\n h\n v\n---[ttttNNNNNNNNNN]---\n 1234My house\n
\n\n
Memory layout after second allocation:
\n\n
\n h\n v\n---[ttttNNNNNNNNNN]---[ttttNNNNNNNNNN]\n 1234My house 5678My house\n
\n\n
A more common way to get this leak is just to forget to free something, instead of overwriting it as above. In Delphi terms, this will occur with the following method:
\n\n
procedure OpenTheFrontDoorOfANewHouse;\nvar\n h: THouse;\nbegin\n h := THouse.Create('My house');\n h.OpenFrontDoor;\n // uh-oh, no .Free here, where does the address go?\nend;\n
\n\n
After this method has executed, the address of the house no longer exists in any of our variables, but the house is still out there.
\n\n
Memory layout:
\n\n
\n h <--+\n v +- before losing pointer\n---[ttttNNNNNNNNNN]--- |\n 1234My house <--+\n\n h (now points nowhere) <--+\n +- after losing pointer\n---[ttttNNNNNNNNNN]--- |\n 1234My house <--+\n
\n\n
As you can see, the old data is left intact in memory, and will not be reused by the memory allocator. The allocator keeps track of which areas of memory have been used, and will not reuse them unless you free them.
\n\n\n\n
Freeing the memory but keeping a (now invalid) reference
\n\n
Demolish the house and erase one of the pieces of paper, but you also have another piece of paper with the old address on it. When you go to that address, you won't find a house, but you might find something that resembles the ruins of one.
\n\n
Perhaps you will even find a house, but it is not the house you were originally given the address to, and thus any attempts to use it as though it belongs to you might fail horribly.
\n\n
Sometimes you might even find that a neighbouring address has a rather big house set up on it that occupies three addresses (Main Street 1-3), and your address goes to the middle of the house. Any attempts to treat that part of the large 3-address house as a single small house might also fail horribly.
\n\n
var\n h1, h2: THouse;\nbegin\n h1 := THouse.Create('My house');\n h2 := h1; // copies the address, not the house\n ...\n h1.Free;\n h1 := nil;\n h2.OpenFrontDoor; // uh-oh, what happened to our house?\n
\n\n
Here the house was torn down, through the reference in h1, and while h1 was cleared as well, h2 still has the old, out-of-date, address. Access to the house that is no longer standing might or might not work.
\n\n
This is a variation of the dangling pointer above. See its memory layout.
\n\n\n\n
Buffer overrun
\n\n
You move more stuff into the house than you can possibly fit, spilling into the neighbours house or yard. When the owner of that neighbouring house later on comes home, he'll find all sorts of things he'll consider his own.
\n\n
This is the reason I chose a fixed-size array. To set the stage, assume that\nthe second house we allocate will, for some reason, be placed before the\nfirst one in memory. In other words, the second house will have a lower\naddress than the first one. Also, they're allocated right next to each other.
\n\n
Thus, this code:
\n\n
var\n h1, h2: THouse;\nbegin\n h1 := THouse.Create('My house');\n h2 := THouse.Create('My other house somewhere');\n ^-----------------------^\n longer than 10 characters\n 0123456789 <-- 10 characters\n
\n h2 h1\n v v\n---[ttttNNNNNNNNNN]----[ttttNNNNNNNNNN]\n 1234My other house somewhereouse\n ^---+--^\n |\n +- overwritten\n
\n\n
The part that will most often cause a crash is when you overwrite important parts of the data you stored that really should not be randomly changed. For instance, it might not be a problem that parts of the name of the h1-house were changed, in terms of crashing the program, but overwriting the overhead of the object will most likely crash when you try to use the broken object, as will overwriting the links that are stored to other objects in the object.
\n\n\n\n
Linked lists
\n\n
When you follow an address on a piece of paper, you get to a house, and at that house there is another piece of paper with a new address on it, for the next house in the chain, and so on.
Here we create a link from our home house to our cabin. We can follow the chain until a house has no NextHouse reference, which means it's the last one. To visit all our houses, we could use the following code:
\n\n
var\n h1, h2: THouse;\n h: THouse;\nbegin\n h1 := THouse.Create('Home');\n h2 := THouse.Create('Cabin');\n h1.NextHouse := h2;\n ...\n h := h1;\n while h <> nil do\n begin\n h.LockAllDoors;\n h.CloseAllWindows;\n h := h.NextHouse;\n end;\n
\n\n
Memory layout (added NextHouse as a link in the object, noted with\nthe four LLLL's in the below diagram):
\n\n
\n h1 h2\n v v\n---[ttttNNNNNNNNNNLLLL]----[ttttNNNNNNNNNNLLLL]\n 1234Home + 5678Cabin +\n | ^ |\n +--------+ * (no link)\n
\n\n\n\n
In basic terms, what is a memory address?
\n\n
A memory address is in basic terms just a number. If you think of memory\nas a big array of bytes, the very first byte has the address 0, the next one\nthe address 1 and so on upwards. This is simplified, but good enough.
\n\n
So this memory layout:
\n\n
\n h1 h2\n v v\n---[ttttNNNNNNNNNN]---[ttttNNNNNNNNNN]\n 1234My house 5678My house\n
\n\n
Might have these two addresses (the leftmost - is address 0):
\n\n
\n
h1 = 4
\n
h2 = 23
\n
\n\n
Which means that our linked list above might actually look like this:
\n\n
\n h1 (=4) h2 (=28)\n v v\n---[ttttNNNNNNNNNNLLLL]----[ttttNNNNNNNNNNLLLL]\n 1234Home 0028 5678Cabin 0000\n | ^ |\n +--------+ * (no link)\n
\n\n
It is typical to store an address that \"points nowhere\" as a zero-address.
\n\n\n\n
In basic terms, what is a pointer?
\n\n
A pointer is just a variable holding a memory address. You can typically ask the programming language to give you its number, but most programming languages and runtimes try to hide the fact that there is a number beneath, just because the number itself does not really hold any meaning to you. It is best to think of a pointer as a black box, i.e. you don't really know or care about how it is actually implemented, just as long as it works.
For some reason most people seem to be born without the part of the brain that understands pointers. This is an aptitude thing, not a skill thing – it requires a complex form of doubly-indirected thinking that some people just can't do.
The complexities of pointers go beyond what we can easily teach. Having students point to each other and using pieces of paper with house addresses are both great learning tools. They do a great job of introducing the basic concepts. Indeed, learning the basic concepts is vital to successfully using pointers. However, in production code, it's common to get into much more complex scenarios than these simple demonstrations can encapsulate.
\n\n
I've been involved with systems where we had structures pointing to other structures pointing to other structures. Some of those structures also contained embedded structures (rather than pointers to additional structures). This is where pointers get really confusing. If you've got multiple levels of indirection, and you start ending up with code like this:
it can get confusing really quickly (imagine a lot more lines, and potentially more levels). Throw in arrays of pointers, and node to node pointers (trees, linked lists) and it gets worse still. I've seen some really good developers get lost once they started working on such systems, even developers who understood the basics really well.
\n\n
Complex structures of pointers don't necessarily indicate poor coding, either (though they can). Composition is a vital piece of good object-oriented programming, and in languages with raw pointers, it will inevitably lead to multi-layered indirection. Further, systems often need to use third-party libraries with structures which don't match each other in style or technique. In situations like that, complexity is naturally going to arise (though certainly, we should fight it as much as possible).
\n\n
I think the best thing colleges can do to help students learn pointers is to use good demonstrations, combined with projects that require pointer use. One difficult project will do more for pointer understanding than a thousand demonstrations. Demonstrations can get you a shallow understanding, but to deeply grasp pointers, you have to really use them.
I don't think that pointers themselves are confusing. Most people can understand the concept. Now, how many pointers can you keep track of, and how many levels of indirection are you comfortable with? It doesn't take too many to put people over the edge. The fact that they can be changed accidentally by bugs in your program can also make them very difficult to debug when things go wrong in your code.
Moving on from there, Beej's Guide to Network Programming teaches the Unix sockets API, from which you can begin to do really fun things. http://beej.us/guide/bgnet/
I like the house address analogy, but I've always thought of the address being to the mailbox itself. This way you can visualize the concept of dereferencing the pointer (opening the mailbox).
\n\n
For instance, following a linked list:\n1) Start with your paper with the address\n2) Go to the address on the paper\n3) Open the mailbox to find a new piece of paper with the next address on it
\n\n
In a linear linked list, the last mailbox has nothing in it (end of the list). In a circular linked list, the last mailbox has the address of the first mailbox in it.
\n\n
Note that step 3 is where the dereference occurs and where you'll crash or go wrong when the address is invalid. Assuming you could walk up to the mailbox of an invalid address, imagine that there's a black hole or something in there that turns the world inside out :)
An analogy I've found helpful for explaining pointers is hyperlinks. Most people can understand that a link on a web page 'points' to another page on the internet, and if you can copy & paste that hyperlink then they will both point to the same original web page. If you go and edit that original page, then follow either of those links (pointers) you'll get that new updated page.
Just to confuse things a bit more, sometimes you have to work with handles instead of pointers. Handles are pointers to pointers, so that the back end can move things in memory to defragment the heap. If the pointer changes in mid-routine, the results are unpredictable, so you first have to lock the handle to make sure nothing goes anywhere.
I think that what makes pointers tricky to learn is that until pointers you're comfortable with the idea that \"at this memory location is a set of bits that represent an int, a double, a character, whatever\".
\n\n
When you first see a pointer, you don't really get what's at that memory location. \"What do you mean, it holds an address?\"
\n\n
I don't agree with the notion that \"you either get them or you don't\".
\n\n
They become easier to understand when you start finding real uses for them (like not passing large structures into functions).
A pointer is a piece of information that allows you to access something else.
\n\n
(And if you do arithmetic on post office box numbers, you may have a problem, because the letter goes in the wrong box. And if somebody moves to another state -- with no forwarding address -- then you have a dangling pointer. On the other hand -- if the post office forwards the mail, then you have a pointer to a pointer.)
The problem with pointers is not the concept. It's the execution and language involved. Additional confusion results when teachers assume that it's the CONCEPT of pointers that's difficult, and not the jargon, or the convoluted mess C and C++ make of the concept. So vast amounts of effort are poured into explaining the concept (like in the accepted answer for this question) and it's pretty much just wasted on someone like me, because I already understand all of that. It's just explaining the wrong part of the problem.
\n\n
To give you an idea of where I'm coming from, I'm someone who understands pointers perfectly well, and I can use them competently in assembler language. Because in assembler language they are not referred to as pointers. They are referred to as addresses. When it comes to programming and using pointers in C, I make a lot of mistakes and get really confused. I still have not sorted this out. Let me give you an example.
\n\n
When an api says:
\n\n
int doIt(char *buffer)\n// *buffer is a pointer to the buffer
\n\n
what does it want?
\n\n
it could want:
\n\n
a number representing an address to a buffer
\n\n
(To give it that, do I say doIt(mybuffer), or doIt(*myBuffer)?)
\n\n
a number representing the address to an address to a buffer
\n\n
(is that doIt(&mybuffer) or doIt(mybuffer) or doIt(*mybuffer)?)
\n\n
a number representing the address to the address to the address to the buffer
\n\n
(maybe that's doIt(&mybuffer). or is it doIt(&&mybuffer) ? or even doIt(&&&mybuffer))
\n\n
and so on, and the language involved doesn't make it as clear because it involves the words "pointer" and "reference" that don't hold as much meaning and clarity to me as "x holds the address to y" and "this function requires an address to y". The answer additionally depends on just what the heck "mybuffer" is to begin with, and what doIt intends to do with it. The language doesn't support the levels of nesting that are encountered in practice. Like when I have to hand a "pointer" in to a function that creates a new buffer, and it modifies the pointer to point at the new location of the buffer. Does it really want the pointer, or a pointer to the pointer, so it knows where to go to modify the contents of the pointer? Most of the time I just have to guess what is meant by "pointer" and most of the time I'm wrong, regardless of how much experience I get at guessing.
\n\n
\"Pointer\" is just too overloaded. Is a pointer an address to a value? or is it a variable that holds an address to a value. When a function wants a pointer, does it want the address that the pointer variable holds, or does it want the address to the pointer variable?\nI'm confused.
I think it might actually be a syntax issue. The C/C++ syntax for pointers seems inconsistent and more complex than it needs to be.
\n\n
Ironically, the thing that actually helped me to understand pointers was encountering the concept of an iterator in the c++ Standard Template Library. It's ironic because I can only assume that iterators were conceived as a generalization of the pointer.
\n\n
Sometimes you just can't see the forest until you learn to ignore the trees.
Not a bad way to grasp it, via iterators.. but keep looking you'll see Alexandrescu start complaining about them.
\n\n
Many ex-C++ devs (that never understood that iterators are a modern pointer before dumping the language) jump to C# and still believe they have decent iterators.
\n\n
Hmm, the problem is that iterators are completely at odds with what the runtime platforms (Java/CLR) are trying to achieve: new, simple, everyone-is-a-dev usage. Which can be good, but they said it once in the purple book and they said it even before and before C:
\n\n
Indirection.
\n\n
A very powerful concept but never so if you do it all the way.. Iterators are useful as they help with abstraction of algorithms, another example. And compile-time is the place for an algorithm, very simple. You know code + data, or in that other language C#:
\n\n
IEnumerable + LINQ + Massive Framework = 300MB runtime penalty indirection of lousy, dragging apps via heaps of instances of reference types..
The reason it's so hard to understand is not because it's a difficult concept but because the syntax is inconsistent.
\n
int *mypointer;\n
\n
You are first taught that the leftmost part of a variable declaration defines the type of the variable. Pointer declaration does not work like this in C and C++. Instead they say that the variable is pointing at the type to the left. In this case: *mypointer is pointing at an int.
\n
I didn't fully grasp pointers until I tried using them in C# (with unsafe). They work in the exact same way, but with logical and consistent syntax. The pointer is a type itself. Here mypointer is a pointer to an int.
I could work with pointers when I only knew C++. I kind of knew what to do in some cases and what not to do from trial/error. But the thing that gave me complete understanding is assembly language. If you do some serious instruction level debugging with an assembly language program you've written, you should be able to understand a lot of things.
The confusion comes from the multiple abstraction layers mixed together in the \"pointer\" concept. Programmers don't get confused by ordinary references in Java/Python, but pointers are different in that they expose characteristics of the underlying memory-architecture.
\n\n
It is a good principle to cleanly separate layers of abstraction, and pointers do not do that.
The reason I had a hard time understanding pointers, at first, is that many explanations include a lot of crap about passing by reference. All this does is confuse the issue. When you use a pointer parameter, you're still passing by value; but the value happens to be an address rather than, say, an int.
\n\n
Someone else has already linked to this tutorial, but I can highlight the moment when I began to understand pointers:
For the moment, ignore the const. The parameter passed to puts() is a pointer, that is the value of a pointer (since all parameters in C are passed by value), and the value of a pointer is the address to which it points, or, simply, an address. Thus when we write puts(strA); as we have seen, we are passing the address of strA[0].
\n
\n\n
The moment I read these words, the clouds parted and a beam of sunlight enveloped me with pointer understanding.
\n\n
Even if you're a VB .NET or C# developer (as I am) and never use unsafe code, it's still worth understanding how pointers work, or you won't understand how object references work. Then you'll have the common-but-mistaken notion that passing an object reference to a method copies the object.
I think the main barrier to understanding pointers is bad teachers.
\n\n
Almost everyone is taught lies about pointers: that they are nothing more than memory addresses, or that they allow you to point to arbitrary locations.
\n\n
And of course that they are difficult to understand, dangerous and semi-magical.
\n\n
None of which is true. Pointers are actually fairly simple concepts, as long as you stick to what the C++ language has to say about them and don't imbue them with attributes that \"usually\" turn out to work in practice, but nevertheless aren't guaranteed by the language, and so aren't part of the actual concept of a pointer.
\n\n
I tried to write up an explanation of this a few months ago in this blog post -- hopefully it'll help someone.
\n\n
(Note, before anyone gets pedantic on me, yes, the C++ standard does say that pointers represent memory addresses. But it does not say that \"pointers are memory addresses, and nothing but memory addresses and may be used or thought of interchangeably with memory addresses\". The distinction is important)
Every C/C++ beginner has the same problem, and that problem occurs not because "pointers are hard to learn" but because of who explains them and how. Some learners grasp it verbally, some visually, and the best way of explaining it is to use a "train" example (it suits both verbal and visual learners).
\n\n
Where \"locomotive\" is a pointer which can not hold anything and \"wagon\" is what \"locomotive\" tries pull (or point to). After, you can classify the \"wagon\" itself, can it hold animals,plants or people (or a mix of them).
I thought I'd add an analogy to this list that I found very helpful when explaining pointers (back in the day) as a Computer Science Tutor; first, let's:
\n\n\n\n
Set the stage:
\n\n
Consider a parking lot with 3 spaces, these spaces are numbered:
In a way, this is like memory locations, they are sequential and contiguous.. sort of like an array. Right now there are no cars in them so it's like an empty array (parking_lot[3] = {0}).
\n\n\n\n
Add the data
\n\n
A parking lot never stays empty for long... if it did it would be pointless and no one would build any. So let's say as the day moves on the lot fills up with 3 cars, a blue car, a red car, and a green car:
These cars are all the same type (car) so one way to think of this is that our cars are some sort of data (say an int) but they have different values (blue, red, green; that could be a color enum)
\n\n\n\n
Enter the pointer
\n\n
Now if I take you into this parking lot, and ask you to find me a blue car, you extend one finger and use it to point to a blue car in spot 1. This is like taking a pointer and assigning it to a memory address (int *finger = parking_lot)
\n\n
Your finger (the pointer) is not the answer to my question. Looking at your finger tells me nothing, but if I look where your finger is pointing (dereferencing the pointer), I can find the car (the data) I was looking for.
\n\n\n\n
Reassigning the pointer
\n\n
Now I can ask you to find a red car instead and you can redirect your finger to a new car. Now your pointer (the same one as before) is showing me new data (the parking spot where the red car can be found) of the same type (the car).
\n\n
The pointer hasn't physically changed, it's still your finger, just the data it was showing me changed. (the \"parking spot\" address)
\n\n\n\n
Double pointers (or a pointer to a pointer)
\n\n
This works with more than one pointer as well. I can ask where the pointer that is pointing to the red car is, and you can use your other hand and point a finger at the first finger. (this is like int **finger_two = &finger)
\n\n
Now if I want to know where the red car is, I can follow the second finger to the first finger, and then on to the car (the data).
\n\n\n\n
The dangling pointer
\n\n
Now let's say you're feeling very much like a statue, and you want to hold your hand pointing at the red car indefinitely. What if that red car drives away?
Your pointer is still pointing to where the red car was, but it is no longer there. Let's say a new car pulls in there... an orange car. Now if I ask you again, "where is the red car", you're still pointing there, but now you're wrong. That's not a red car, that's orange.
\n\n\n\n
Pointer arithmetic
\n\n
Ok, so you're still pointing at the second parking spot (now occupied by the orange car).
Well I have a new question now... I want to know the color of the car in the next parking spot. You can see you're pointing at spot 2, so you just add 1 and you're pointing at the next spot. (finger+1), now since I wanted to know what the data was there, you have to check that spot (not just the finger) so you can dereference the pointer (*(finger+1)) to see there is a green car present there (the data at that location)
Some answers above have asserted that "pointers aren't really hard", but haven't gone on to address directly where "pointers are hard!" comes from. Some years back I tutored first year CS students (for only one year, since I clearly sucked at it) and it was clear to me that the idea of a pointer is not hard. What's hard is understanding why and when you would want a pointer.
\n\n
I don't think you can divorce that question - why and when to use a pointer - from explaining broader software engineering issues. Why every variable should not be a global variable, and why one should factor out similar code into functions (that, get this, use pointers to specialize their behaviour to their call site).
The way I liked to explain it was in terms of arrays and indexes - people might not be familiar with pointers, but they generally know what an index is.
\n\n
So I say imagine that the RAM is an array (and you have only 10-bytes of RAM):
Then a pointer to a variable is really just the index of (the first byte of) that variable in the RAM.
\n\n
So if you have a pointer/index unsigned char index = 2, then the value is obviously the third element, or the number 4. A pointer to a pointer is where you take that number and use it as an index itself, like RAM[RAM[index]].
\n\n
I would draw the array on a sheet of paper, and just use it to show things like many pointers pointing to the same memory, pointer arithmetic, pointer to pointer, and so on.
Why are pointers such a leading factor of confusion for many new, and even old, college level students in the C/C++ language?
\\n
\\n\\n
The concept of a placeholder for a value - variables - maps onto something we're taught in school - algebra. There isn't an existing parallel you can draw without understanding how memory is physically laid out within a computer, and no one thinks about this kind of thing until they're dealing with low level things - at the C/C++/byte communications level.
\\n\\n
\\n
Are there any tools or thought processes that helped you understand how pointers work at the variable, function, and beyond level?
\\n
\\n\\n
Addresses boxes. I remember when I was learning to program BASIC into microcomputers, there were these pretty books with games in them, and sometimes you had to poke values into particular addresses. They had a picture of a bunch of boxes, incrementally labelled with 0, 1, 2... and it was explained that only one small thing (a byte) could fit in these boxes, and there were a lot of them - some computers had as many as 65535! They were next to each other, and they all had an address.
\\n\\n
\\n
What are some good practice things that can be done to bring somebody to the level of, "Ah-hah, I got it," without getting them bogged down in the overall concept? Basically, drill like scenarios.
I don't think pointers as a concept are particularly tricky - most students' mental models map to something like this and some quick box sketches can help.
\\n\\n
The difficulty, at least that which I've experienced in the past and seen others deal with, is that the management of pointers in C/C++ can be unnecessarily convoluted.
I don't see what is so confusing about pointers. They point to a location in memory; that is, they store a memory address. In C/C++ you can specify the type the pointer points to. For example:
\\n\\n

int* my_int_pointer;

\\n\\n
Says that my_int_pointer contains the address to a location that contains an int.
\\n\\n
The problem with pointers is that they point to a location in memory, so it is easy to trail off into some location you should not be in. As proof look at the numerous security holes in C/C++ applications from buffer overflow (incrementing the pointer past the allocated boundary).
The reason pointers seem to confuse so many people is that they mostly come with little or no background in computer architecture. Since many don't seem to have an idea of how computers (the machine) are actually implemented - working in C/C++ seems alien.
\\n\\n
A drill is to ask them to implement a simple bytecode based virtual machine (in any language they choose, Python works great for this) with an instruction set focussed on pointer operations (load, store, direct/indirect addressing). Then ask them to write simple programs for that instruction set.
\\n\\n
Anything requiring slightly more than simple addition is going to involve pointers and they are sure to get it.
I think that the main reason that people have trouble with it is because it's generally not taught in an interesting and engaging manner. I'd like to see a lecturer get 10 volunteers from the crowd and give them a 1 meter ruler each, get them to stand around in a certain configuration and use the rulers to point at each other. Then show pointer arithmetic by moving people around (and where they point their rulers). It'd be a simple but effective (and above all memorable) way of showing the concepts without getting too bogged down in the mechanics.
\\n\\n
Once you get to C and C++ it seems to get harder for some people. I'm not sure if this is because they are finally putting theory that they don't properly grasp into practice or because pointer manipulation is inherently harder in those languages. I can't remember my own transition that well, but I knew pointers in Pascal and then moved to C and got totally lost.
Pointers are a concept that for many can be confusing at first, in particular when it comes to copying pointer values around and still referencing the same memory block.
\\n\\n
I've found that the best analogy is to consider the pointer as a piece of paper with a house address on it, and the memory block it references as the actual house. All sorts of operations can thus be easily explained.
\\n\\n
I've added some Delphi code down below, and some comments where appropriate. I chose Delphi since my other main programming language, C#, does not exhibit things like memory leaks in the same way.
\\n\\n
If you only wish to learn the high-level concept of pointers, then you should ignore the parts labelled "Memory layout" in the explanation below. They are intended to give examples of what memory could look like after operations, but they are more low-level in nature. However, in order to accurately explain how buffer overruns really work, it was important that I added these diagrams.
\\n\\n
Disclaimer: For all intents and purposes, this explanation and the example memory layouts are vastly simplified. There's more overhead and a lot more details you would need to know if you need to deal with memory on a low-level basis. However, for the intents of explaining memory and pointers, it is accurate enough.
\\n\\n\\n\\n
Let's assume the THouse class used below looks like this:
When you initialize the house object, the name given to the constructor is copied into the private field FName. There is a reason it is defined as a fixed-size array.
\\n\\n
In memory, there will be some overhead associated with the house allocation, I'll illustrate this below like this:
The \\\"tttt\\\" area is overhead, there will typically be more of this for various types of runtimes and languages, like 8 or 12 bytes. It is imperative that whatever values are stored in this area never gets changed by anything other than the memory allocator or the core system routines, or you risk crashing the program.
\\n\\n\\n\\n
Allocate memory
\\n\\n
Get an entrepreneur to build your house, and give you the address to the house. In contrast to the real world, memory allocation cannot be told where to allocate, but will find a suitable spot with enough room, and report back the address to the allocated memory.
\\n\\n
In other words, the entrepreneur will choose the spot.
\\n\\n

THouse.Create('My house');

\\n\\n
Memory layout:
\\n\\n

---[ttttNNNNNNNNNN]---
    1234My house

\\n\\n\\n\\n
Keep a variable with the address
\\n\\n
Write the address to your new house down on a piece of paper. This paper will serve as your reference to your house. Without this piece of paper, you're lost, and cannot find the house, unless you're already in it.
\\n\\n

var
  h: THouse;
begin
  h := THouse.Create('My house');
  ...

Just write the address on a new piece of paper. You now have two pieces of paper that will get you to the same house, not two separate houses. Any attempts to follow the address from one paper and rearrange the furniture at that house will make it seem that the other house has been modified in the same manner, unless you can explicitly detect that it's actually just one house.
\\n\\n
Note This is usually the concept that I have the most problem explaining to people, two pointers does not mean two objects or memory blocks.
\\n\\n

var
  h1, h2: THouse;
begin
  h1 := THouse.Create('My house');
  h2 := h1; // copies the address, not the house
  ...

Demolish the house. You can then later on reuse the paper for a new address if you so wish, or clear it to forget the address to the house that no longer exists.
\\n\\n

var
  h: THouse;
begin
  h := THouse.Create('My house');
  ...
  h.Free;
  h := nil;

\\n\\n
Here I first construct the house, and get hold of its address. Then I do something to the house (use it, the ... code, left as an exercise for the reader), and then I free it. Lastly I clear the address from my variable.
\\n\\n
Memory layout:
\\n\\n

    h <--+
    v    +- before free
---[ttttNNNNNNNNNN]---  |
    1234My house     <--+

    h (now points nowhere) <--+
                              +- after free
----------------------        | (note, memory might still
    xx34My house          <--+   contain some data)

\\n\\n\\n\\n
Dangling pointers
\\n\\n
You tell your entrepreneur to destroy the house, but you forget to erase the address from your piece of paper. When later on you look at the piece of paper, you've forgotten that the house is no longer there, and goes to visit it, with failed results (see also the part about an invalid reference below).
\\n\\n

var
  h: THouse;
begin
  h := THouse.Create('My house');
  ...
  h.Free;
  ... // forgot to clear h here
  h.OpenFrontDoor; // will most likely fail

\\n\\n
Using h after the call to .Free might work, but that is just pure luck. Most likely it will fail, at a customer's place, in the middle of a critical operation.
\\n\\n

    h <--+
    v    +- before free
---[ttttNNNNNNNNNN]---  |
    1234My house     <--+

    h <--+
    v    +- after free
----------------------  |
    xx34My house     <--+

\\n\\n
As you can see, h still points to the remnants of the data in memory, but since it might not be complete, using it as before might fail.
\\n\\n\\n\\n
Memory leak
\\n\\n
You lose the piece of paper and cannot find the house. The house is still standing somewhere though, and when you later on want to construct a new house, you cannot reuse that spot.
\\n\\n

var
  h: THouse;
begin
  h := THouse.Create('My house');
  h := THouse.Create('My house'); // uh-oh, what happened to our first house?
  ...
  h.Free;
  h := nil;

\\n\\n
Here we overwrote the contents of the h variable with the address of a new house, but the old one is still standing... somewhere. After this code, there is no way to reach that house, and it will be left standing. In other words, the allocated memory will stay allocated until the application closes, at which point the operating system will tear it down.

                       h
                       v
---[ttttNNNNNNNNNN]---[ttttNNNNNNNNNN]
    1234My house       5678My house

\\n\\n
A more common way to get this leak is just to forget to free something, instead of overwriting it as above. In Delphi terms, this will occur with the following method:
\\n\\n

procedure OpenTheFrontDoorOfANewHouse;
var
  h: THouse;
begin
  h := THouse.Create('My house');
  h.OpenFrontDoor;
  // uh-oh, no .Free here, where does the address go?
end;

\\n\\n
After this method has executed, there's no place in our variables that the address to the house exists, but the house is still out there.
\\n\\n
Memory layout:
\\n\\n

    h <--+
    v    +- before losing pointer
---[ttttNNNNNNNNNN]---  |
    1234My house     <--+

    h (now points nowhere) <--+
                              +- after losing pointer
---[ttttNNNNNNNNNN]---        |
    1234My house          <--+

\\n\\n
As you can see, the old data is left intact in memory, and will not be reused by the memory allocator. The allocator keeps track of which areas of memory have been used, and will not reuse them unless you free them.
\\n\\n\\n\\n
Freeing the memory but keeping a (now invalid) reference
\\n\\n
Demolish the house and erase one of the pieces of paper, but you also have another piece of paper with the old address on it. When you go to that address, you won't find a house, but you might find something that resembles the ruins of one.
\\n\\n
Perhaps you will even find a house, but it is not the house you were originally given the address to, and thus any attempts to use it as though it belongs to you might fail horribly.
\\n\\n
Sometimes you might even find that a neighbouring address has a rather big house set up on it that occupies three addresses (Main Street 1-3), and your address goes to the middle of the house. Any attempts to treat that part of the large 3-address house as a single small house might also fail horribly.
\\n\\n

var
  h1, h2: THouse;
begin
  h1 := THouse.Create('My house');
  h2 := h1; // copies the address, not the house
  ...
  h1.Free;
  h1 := nil;
  h2.OpenFrontDoor; // uh-oh, what happened to our house?

\\n\\n
Here the house was torn down, through the reference in h1, and while h1 was cleared as well, h2 still has the old, out-of-date, address. Access to the house that is no longer standing might or might not work.
\\n\\n
This is a variation of the dangling pointer above. See its memory layout.
\\n\\n\\n\\n
Buffer overrun
\\n\\n
You move more stuff into the house than you can possibly fit, spilling into the neighbour's house or yard. When the owner of that neighbouring house later on comes home, he'll find all sorts of things he'll consider his own.
\\n\\n
This is the reason I chose a fixed-size array. To set the stage, assume that the second house we allocate will, for some reason, be placed before the first one in memory. In other words, the second house will have a lower address than the first one. Also, they're allocated right next to each other.
\\n\\n
Thus, this code:
\\n\\n

var
  h1, h2: THouse;
begin
  h1 := THouse.Create('My house');
  h2 := THouse.Create('My other house somewhere');
                      ^-----------------------^
                      longer than 10 characters
                      0123456789 <-- 10 characters


    h2                  h1
    v                   v
---[ttttNNNNNNNNNN]----[ttttNNNNNNNNNN]
    1234My other house somewhereouse
                        ^---+--^
                            |
                            +- overwritten

\\n\\n
The part that will most often cause a crash is when you overwrite important parts of the data you stored that really should not be randomly changed. For instance, it might not be a problem that parts of the name of the h1-house were changed, in terms of crashing the program, but overwriting the overhead of the object will most likely crash when you try to use the broken object, as will overwriting links that are stored to other objects in the object.
\\n\\n\\n\\n
Linked lists
\\n\\n
When you follow an address on a piece of paper, you get to a house, and at that house there is another piece of paper with a new address on it, for the next house in the chain, and so on.
Here we create a link from our home house to our cabin. We can follow the chain until a house has no NextHouse reference, which means it's the last one. To visit all our houses, we could use the following code:
\\n\\n

var
  h1, h2: THouse;
  h: THouse;
begin
  h1 := THouse.Create('Home');
  h2 := THouse.Create('Cabin');
  h1.NextHouse := h2;
  ...
  h := h1;
  while h <> nil do
  begin
    h.LockAllDoors;
    h.CloseAllWindows;
    h := h.NextHouse;
  end;

\\n\\n
Memory layout (added NextHouse as a link in the object, noted with the four LLLL's in the below diagram):
\\n\\n

    h1                      h2
    v                       v
---[ttttNNNNNNNNNNLLLL]----[ttttNNNNNNNNNNLLLL]
    1234Home      +        5678Cabin     +
                  |        ^             |
                  +--------+             * (no link)

\\n\\n\\n\\n
In basic terms, what is a memory address?
\\n\\n
A memory address is in basic terms just a number. If you think of memory as a big array of bytes, the very first byte has the address 0, the next one the address 1 and so on upwards. This is simplified, but good enough.
\\n\\n
So this memory layout:
\\n\\n

    h1                  h2
    v                   v
---[ttttNNNNNNNNNN]---[ttttNNNNNNNNNN]
    1234My house       5678My house

\\n\\n
Might have these two addresses (the leftmost - is address 0):
\\n\\n
\\n
h1 = 4
\\n
h2 = 23
\\n
\\n\\n
Which means that our linked list above might actually look like this:
\\n\\n

    h1 (=4)                 h2 (=28)
    v                       v
---[ttttNNNNNNNNNNLLLL]----[ttttNNNNNNNNNNLLLL]
    1234Home      0028     5678Cabin     0000
                  |        ^             |
                  +--------+             * (no link)

\\n\\n
It is typical to store an address that "points nowhere" as a zero-address.
\\n\\n\\n\\n
In basic terms, what is a pointer?
\\n\\n
A pointer is just a variable holding a memory address. You can typically ask the programming language to give you its number, but most programming languages and runtimes try to hide the fact that there is a number beneath, just because the number itself does not really hold any meaning to you. It is best to think of a pointer as a black box, i.e. you don't really know or care about how it is actually implemented, just as long as it works.
For some reason most people seem to be born without the part of the brain that understands pointers. This is an aptitude thing, not a skill thing – it requires a complex form of doubly-indirected thinking that some people just can't do.
The complexities of pointers go beyond what we can easily teach. Having students point to each other and using pieces of paper with house addresses are both great learning tools. They do a great job of introducing the basic concepts. Indeed, learning the basic concepts is vital to successfully using pointers. However, in production code, it's common to get into much more complex scenarios than these simple demonstrations can encapsulate.
\\n\\n
I've been involved with systems where we had structures pointing to other structures pointing to other structures. Some of those structures also contained embedded structures (rather than pointers to additional structures). This is where pointers get really confusing. If you've got multiple levels of indirection, and you start ending up with code like this:
it can get confusing really quickly (imagine a lot more lines, and potentially more levels). Throw in arrays of pointers, and node to node pointers (trees, linked lists) and it gets worse still. I've seen some really good developers get lost once they started working on such systems, even developers who understood the basics really well.
\\n\\n
Complex structures of pointers don't necessarily indicate poor coding, either (though they can). Composition is a vital piece of good object-oriented programming, and in languages with raw pointers, it will inevitably lead to multi-layered indirection. Further, systems often need to use third-party libraries with structures which don't match each other in style or technique. In situations like that, complexity is naturally going to arise (though certainly, we should fight it as much as possible).
\\n\\n
I think the best thing colleges can do to help students learn pointers is to use good demonstrations, combined with projects that require pointer use. One difficult project will do more for pointer understanding than a thousand demonstrations. Demonstrations can get you a shallow understanding, but to deeply grasp pointers, you have to really use them.
I don't think that pointers themselves are confusing. Most people can understand the concept. Now how many pointers can you think about, or how many levels of indirection are you comfortable with? It doesn't take too many to put people over the edge. The fact that they can be changed accidentally by bugs in your program can also make them very difficult to debug when things go wrong in your code.
Moving on from there, Beej's Guide to Network Programming teaches the Unix sockets API, from which you can begin to do really fun things. http://beej.us/guide/bgnet/
I like the house address analogy, but I've always thought of the address being to the mailbox itself. This way you can visualize the concept of dereferencing the pointer (opening the mailbox).
\\n\\n
For instance, following a linked list:
1) start with your paper with the address
2) Go to the address on the paper
3) Open the mailbox to find a new piece of paper with the next address on it
\\n\\n
In a linear linked list, the last mailbox has nothing in it (end of the list). In a circular linked list, the last mailbox has the address of the first mailbox in it.
\\n\\n
Note that step 3 is where the dereference occurs and where you'll crash or go wrong when the address is invalid. Assuming you could walk up to the mailbox of an invalid address, imagine that there's a black hole or something in there that turns the world inside out :)
An analogy I've found helpful for explaining pointers is hyperlinks. Most people can understand that a link on a web page 'points' to another page on the internet, and if you can copy & paste that hyperlink then they will both point to the same original web page. If you go and edit that original page, then follow either of those links (pointers) you'll get that new updated page.
Just to confuse things a bit more, sometimes you have to work with handles instead of pointers. Handles are pointers to pointers, so that the back end can move things in memory to defragment the heap. If the pointer changes in mid-routine, the results are unpredictable, so you first have to lock the handle to make sure nothing goes anywhere.
I think that what makes pointers tricky to learn is that until pointers you're comfortable with the idea that "at this memory location is a set of bits that represent an int, a double, a character, whatever".
\\n\\n
When you first see a pointer, you don't really get what's at that memory location. \\\"What do you mean, it holds an address?\\\"
\\n\\n
I don't agree with the notion that \\\"you either get them or you don't\\\".
\\n\\n
They become easier to understand when you start finding real uses for them (like not passing large structures into functions).
It's a piece of information that allows you to access something else.
\\n\\n
(And if you do arithmetic on post office box numbers, you may have a problem, because the letter goes in the wrong box. And if somebody moves to another state -- with no forwarding address -- then you have a dangling pointer. On the other hand -- if the post office forwards the mail, then you have a pointer to a pointer.)
The problem with pointers is not the concept. It's the execution and language involved. Additional confusion results when teachers assume that it's the CONCEPT of pointers that's difficult, and not the jargon, or the convoluted mess C and C++ makes of the concept. So vast amounts of effort are poored into explaining the concept (like in the accepted answer for this question) and it's pretty much just wasted on someone like me, because I already understand all of that. It's just explaining the wrong part of the problem.
\\n\\n
To give you an idea of where I'm coming from, I'm someone who understands pointers perfectly well, and I can use them competently in assembler language. Because in assembler language they are not referred to as pointers. They are referred to as addresses. When it comes to programming and using pointers in C, I make a lot of mistakes and get really confused. I still have not sorted this out. Let me give you an example.
\\n\\n
When an api says:
\\n\\n
int doIt(char *buffer )\\n//*buffer is a pointer to the buffer\\n
\\n\\n
what does it want?
\\n\\n
it could want:
\\n\\n
a number representing an address to a buffer
\\n\\n
(To give it that, do I say doIt(mybuffer), or doIt(*myBuffer)?)
\\n\\n
a number representing the address to an address to a buffer
\\n\\n
(is that doIt(&mybuffer) or doIt(mybuffer) or doIt(*mybuffer)?)
\\n\\n
a number representing the address to the address to the address to the buffer
\\n\\n
(maybe that's doIt(&mybuffer). or is it doIt(&&mybuffer) ? or even doIt(&&&mybuffer))
\\n\\n
and so on, and the language involved doesn't make it as clear because it involves the words \\\"pointer\\\" and \\\"reference\\\" that don't hold as much meaning and clarity to me as \\\"x holds the address to y\\\" and \\\"this function requires an address to y\\\". The answer additionally depends on just what the heck \\\"mybuffer\\\" is to begin with, and what doIt intends to do with it. The language doesn't support the levels of nesting that are encountered in practice. Like when I have to hand a \\\"pointer\\\" in to a function that creates a new buffer, and it modifies the pointer to point at the new location of the buffer. Does it really want the pointer, or a pointer to the pointer, so it knows where to go to modify the contents of the pointer. Most of the time I just have to guess what is meant by \\\"pointer\\\" and most of the time I'm wrong, regardless of how much experience I get at guessing.
\\n\\n
\\\"Pointer\\\" is just too overloaded. Is a pointer an address to a value? or is it a variable that holds an address to a value. When a function wants a pointer, does it want the address that the pointer variable holds, or does it want the address to the pointer variable?\\nI'm confused.
I think it might actually be a syntax issue. The C/C++ syntax for pointers seems inconsistent and more complex than it needs to be.
\\n\\n
Ironically, the thing that actually helped me to understand pointers was encountering the concept of an iterator in the c++ Standard Template Library. It's ironic because I can only assume that iterators were conceived as a generalization of the pointer.
\\n\\n
Sometimes you just can't see the forest until you learn to ignore the trees.
Not a bad way to grasp it, via iterators.. but keep looking you'll see Alexandrescu start complaining about them.
\\n\\n
Many ex-C++ devs (that never understood that iterators are a modern pointer before dumping the language) jump to C# and still believe they have decent iterators.
\\n\\n
Hmm, the problem is that all that iterators are is in complete odds at what the runtime platforms (Java/CLR) are trying to achieve: new, simple, everyone-is-a-dev usage. Which can be good, but they said it once in the purple book and they said it even before and before C:
\\n\\n
Indirection.
\\n\\n
A very powerful concept but never so if you do it all the way.. Iterators are useful as they help with abstraction of algorithms, another example. And compile-time is the place for an algorithm, very simple. You know code + data, or in that other language C#:
\\n\\n
IEnumerable + LINQ + Massive Framework = 300MB runtime penalty indirection of lousy, dragging apps via heaps of instances of reference types..
The reason it's so hard to understand is not because it's a difficult concept but because the syntax is inconsistent.
\\n
int *mypointer;\\n
\\n
You are first learned that the leftmost part of a variable creation defines the type of the variable. Pointer declaration does not work like this in C and C++. Instead they say that the variable is pointing on the type to the left. In this case: *mypointer is pointing on an int.
\\n
I didn't fully grasp pointers until i tried using them in C# (with unsafe), they work in exact same way but with logical and consistent syntax. The pointer is a type itself. Here mypointer is a pointer to an int.
I could work with pointers when I only knew C++. I kind of knew what to do in some cases and what not to do from trial/error. But the thing that gave me complete understanding is assembly language. If you do some serious instruction level debugging with an assembly language program you've written, you should be able to understand a lot of things.
The confusion comes from the multiple abstraction layers mixed together in the \\\"pointer\\\" concept. Programmers don't get confused by ordinary references in Java/Python, but pointers are different in that they expose characteristics of the underlying memory-architecture.
\\n\\n
It is a good principle to cleanly separate layers of abstraction, and pointers do not do that.
The reason I had a hard time understanding pointers, at first, is that many explanations include a lot of crap about passing by reference. All this does is confuse the issue. When you use a pointer parameter, you're still passing by value; but the value happens to be an address rather than, say, an int.
\\n\\n
Someone else has already linked to this tutorial, but I can highlight the moment when I began to understand pointers:
For the moment, ignore the const. The parameter passed to puts() is a pointer, that is the value of a pointer (since all parameters in C are passed by value), and the value of a pointer is the address to which it points, or, simply, an address. Thus when we write puts(strA); as we have seen, we are passing the address of strA[0].
\\n
\\n\\n
The moment I read these words, the clouds parted and a beam of sunlight enveloped me with pointer understanding.
\\n\\n
Even if you're a VB .NET or C# developer (as I am) and never use unsafe code, it's still worth understanding how pointers work, or you won't understand how object references work. Then you'll have the common-but-mistaken notion that passing an object reference to a method copies the object.
I think the main barrier to understanding pointers is bad teachers.
\\n\\n
Almost everyone are taught lies about pointers: That they are nothing more than memory addresses, or that they allow you to point to arbitrary locations.
\\n\\n
And of course that they are difficult to understand, dangerous and semi-magical.
\\n\\n
None of which is true. Pointers are actually fairly simple concepts, as long as you stick to what the C++ language has to say about them and don't imbue them with attributes that \\\"usually\\\" turn out to work in practice, but nevertheless aren't guaranteed by the language, and so aren't part of the actual concept of a pointer.
\\n\\n
I tried to write up an explanation of this a few months ago in this blog post -- hopefully it'll help someone.
\\n\\n
(Note, before anyone gets pedantic on me, yes, the C++ standard does say that pointers represent memory addresses. But it does not say that \\\"pointers are memory addresses, and nothing but memory addresses and may be used or thought of interchangeably with memory addresses\\\". The distinction is important)
Every C/C++ beginner has the same problem and that problem occurs not because \\\"pointers are hard to learn\\\" but \\\"who and how it is explained\\\". Some learners gather it verbally some visually and the best way of explaining it is to use \\\"train\\\" example (suits for verbal and visual example).
\\n\\n
Where \\\"locomotive\\\" is a pointer which can not hold anything and \\\"wagon\\\" is what \\\"locomotive\\\" tries pull (or point to). After, you can classify the \\\"wagon\\\" itself, can it hold animals,plants or people (or a mix of them).
I thought I'd add an analogy to this list that I found very helpful when explaining pointers (back in the day) as a Computer Science Tutor; first, let's:
\\n\\n\\n\\n
Set the stage:
\\n\\n
Consider a parking lot with 3 spaces, these spaces are numbered:
In a way, this is like memory locations, they are sequential and contiguous.. sort of like an array. Right now there are no cars in them so it's like an empty array (parking_lot[3] = {0}).
\\n\\n\\n\\n
Add the data
\\n\\n
A parking lot never stays empty for long... if it did it would be pointless and no one would build any. So let's say as the day moves on the lot fills up with 3 cars, a blue car, a red car, and a green car:
These cars are all the same type (car) so one way to think of this is that our cars are some sort of data (say an int) but they have different values (blue, red, green; that could be an color enum)
\\n\\n\\n\\n
Enter the pointer
\\n\\n
Now if I take you into this parking lot, and ask you to find me a blue car, you extend one finger and use it to point to a blue car in spot 1. This is like taking a pointer and assigning it to a memory address (int *finger = parking_lot)
\\n\\n
Your finger (the pointer) is not the answer to my question. Looking at your finger tells me nothing, but if I look where you're finger is pointing to (dereferencing the pointer), I can find the car (the data) I was looking for.
\\n\\n\\n\\n
Reassigning the pointer
\\n\\n
Now I can ask you to find a red car instead and you can redirect your finger to a new car. Now your pointer (the same one as before) is showing me new data (the parking spot where the red car can be found) of the same type (the car).
\\n\\n
The pointer hasn't physically changed, it's still your finger, just the data it was showing me changed. (the \\\"parking spot\\\" address)
\\n\\n\\n\\n
Double pointers (or a pointer to a pointer)
\\n\\n
This works with more than one pointer as well. I can ask where is the pointer, which is pointing to the red car and you can use your other hand and point with a finger to the first finger. (this is like int **finger_two = &finger)
\\n\\n
Now if I want to know where the blue car is I can follow the first finger's direction to the second finger, to the car (the data).
\\n\\n\\n\\n
The dangling pointer
\\n\\n
Now let's say you're feeling very much like a statue, and you want to hold your hand pointing at the red car indefinitely. What if that red car drives away?
Your pointer is still pointing to where the red car was but is no longer. Let's say a new car pulls in there... a Orange car. Now if I ask you again, \\\"where is the red car\\\", you're still pointing there, but now you're wrong. That's not an red car, that's orange.
\\n\\n\\n\\n
Pointer arithmetic
\\n\\n
Ok, so you're still pointing at the second parking spot (now occupied by the Orange car)
Well I have a new question now... I want to know the color of the car in the next parking spot. You can see you're pointing at spot 2, so you just add 1 and you're pointing at the next spot. (finger+1), now since I wanted to know what the data was there, you have to check that spot (not just the finger) so you can deference the pointer (*(finger+1)) to see there is a green car present there (the data at that location)
Some answers above have asserted that \\\"pointers aren't really hard\\\", but haven't gone on to address directly where \\\"pointer are hard!\\\" comes from. Some years back I tutored first year CS students (for only one year, since I clearly sucked at it) and it was clear to me that the idea of pointer is not hard. What's hard is understanding why and when you would want a pointer.
\\n\\n
I don't think you can divorce that question - why and when to use a pointer - from explaining broader software engineering issues. Why every variable should not be a global variable, and why one should factor out similar code into functions (that, get this, use pointers to specialize their behaviour to their call site).
The way I liked to explain it was in terms of arrays and indexes - people might not be familiar with pointers, but they generally know what an index is.
\\n\\n
So I say imagine that the RAM is an array (and you have only 10-bytes of RAM):
Then a pointer to a variable is really just the index of (the first byte of) that variable in the RAM.
\\n\\n
So if you have a pointer/index unsigned char index = 2, then the value is obviously the third element, or the number 4. A pointer to a pointer is where you take that number and use it as an index itself, like RAM[RAM[index]].
\\n\\n
I would draw an array on a list of paper, and just use it to show things like many pointers pointing to the same memory, pointer arithmetic, pointer to pointer, and so on.
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5727","https://Stackoverflow.com","https://Stackoverflow.com/users/568/"],"string":"[\n \"https://Stackoverflow.com/questions/5727\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/568/\"\n]"},"input":{"kind":"string","value":"Why are pointers such a leading factor of confusion for many new, and even old, college level students in C or C++? Are there any tools or thought processes that helped you understand how pointers work at the variable, function, and beyond level?\n\nWhat are some good practice things that can be done to bring somebody to the level of, \"Ah-hah, I got it,\" without getting them bogged down in the overall concept? Basically, drill like scenarios."},"output":{"kind":"string","value":"Pointers is a concept that for many can be confusing at first, in particular when it comes to copying pointer values around and still referencing the same memory block.\n\nI've found that the best analogy is to consider the pointer as a piece of paper with a house address on it, and the memory block it references as the actual house. All sorts of operations can thus be easily explained.\n\nI've added some Delphi code down below, and some comments where appropriate. I chose Delphi since my other main programming language, C#, does not exhibit things like memory leaks in the same way.\n\nIf you only wish to learn the high-level concept of pointers, then you should ignore the parts labelled \"Memory layout\" in the explanation below. They are intended to give examples of what memory could look like after operations, but they are more low-level in nature. However, in order to accurately explain how buffer overruns really work, it was important that I added these diagrams.\n\n*Disclaimer: For all intents and purposes, this explanation and the example memory\nlayouts are vastly simplified. 
There's more overhead and a lot more details you would\nneed to know if you need to deal with memory on a low-level basis. However, for the\nintents of explaining memory and pointers, it is accurate enough.*\n\n---\n\nLet's assume the THouse class used below looks like this:\n\n```\ntype\n THouse = class\n private\n FName : array[0..9] of Char;\n public\n constructor Create(name: PChar);\n end;\n\n```\n\nWhen you initialize the house object, the name given to the constructor is copied into the private field FName. There is a reason it is defined as a fixed-size array.\n\nIn memory, there will be some overhead associated with the house allocation, I'll illustrate this below like this:\n\n```\n\n---[ttttNNNNNNNNNN]---\n ^ ^\n | |\n | +- the FName array\n |\n +- overhead\n\n```\n\nThe \"tttt\" area is overhead, there will typically be more of this for various types of runtimes and languages, like 8 or 12 bytes. It is imperative that whatever values are stored in this area never gets changed by anything other than the memory allocator or the core system routines, or you risk crashing the program.\n\n---\n\n**Allocate memory**\n\nGet an entrepreneur to build your house, and give you the address to the house. In contrast to the real world, memory allocation cannot be told where to allocate, but will find a suitable spot with enough room, and report back the address to the allocated memory.\n\nIn other words, the entrepreneur will choose the spot.\n\n```\nTHouse.Create('My house');\n\n```\n\nMemory layout:\n\n```\n\n---[ttttNNNNNNNNNN]---\n 1234My house\n\n```\n\n---\n\n**Keep a variable with the address**\n\nWrite the address to your new house down on a piece of paper. This paper will serve as your reference to your house. 
Without this piece of paper, you're lost, and cannot find the house, unless you're already in it.\n\n```\nvar\n h: THouse;\nbegin\n h := THouse.Create('My house');\n ...\n\n```\n\nMemory layout:\n\n```\n\n h\n v\n---[ttttNNNNNNNNNN]---\n 1234My house\n\n```\n\n---\n\n**Copy pointer value** \n\nJust write the address on a new piece of paper. You now have two pieces of paper that will get you to the same house, not two separate houses. Any attempts to follow the address from one paper and rearrange the furniture at that house will make it seem that *the other house* has been modified in the same manner, unless you can explicitly detect that it's actually just one house.\n\n*Note* This is usually the concept that I have the most problem explaining to people, two pointers does not mean two objects or memory blocks.\n\n```\nvar\n h1, h2: THouse;\nbegin\n h1 := THouse.Create('My house');\n h2 := h1; // copies the address, not the house\n ...\n\n```\n\n```\n\n h1\n v\n---[ttttNNNNNNNNNN]---\n 1234My house\n ^\n h2\n\n```\n\n---\n\n**Freeing the memory** \n\nDemolish the house. You can then later on reuse the paper for a new address if you so wish, or clear it to forget the address to the house that no longer exists.\n\n```\nvar\n h: THouse;\nbegin\n h := THouse.Create('My house');\n ...\n h.Free;\n h := nil;\n\n```\n\nHere I first construct the house, and get hold of its address. Then I do something to the house (use it, the ... code, left as an exercise for the reader), and then I free it. Lastly I clear the address from my variable.\n\nMemory layout:\n\n```\n\n h <--+\n v +- before free\n---[ttttNNNNNNNNNN]--- |\n 1234My house <--+\n\n h (now points nowhere) <--+\n +- after free\n---------------------- | (note, memory might still\n xx34My house <--+ contain some data)\n\n```\n\n---\n\n**Dangling pointers**\n\nYou tell your entrepreneur to destroy the house, but you forget to erase the address from your piece of paper. 
When later on you look at the piece of paper, you've forgotten that the house is no longer there, and goes to visit it, with failed results (see also the part about an invalid reference below).\n\n```\nvar\n h: THouse;\nbegin\n h := THouse.Create('My house');\n ...\n h.Free;\n ... // forgot to clear h here\n h.OpenFrontDoor; // will most likely fail\n\n```\n\nUsing `h` after the call to `.Free` *might* work, but that is just pure luck. Most likely it will fail, at a customers place, in the middle of a critical operation.\n\n```\n\n h <--+\n v +- before free\n---[ttttNNNNNNNNNN]--- |\n 1234My house <--+\n\n h <--+\n v +- after free\n---------------------- |\n xx34My house <--+\n\n```\n\nAs you can see, h still points to the remnants of the data in memory, but\nsince it might not be complete, using it as before might fail.\n\n---\n\n**Memory leak** \n\nYou lose the piece of paper and cannot find the house. The house is still standing somewhere though, and when you later on want to construct a new house, you cannot reuse that spot.\n\n```\nvar\n h: THouse;\nbegin\n h := THouse.Create('My house');\n h := THouse.Create('My house'); // uh-oh, what happened to our first house?\n ...\n h.Free;\n h := nil;\n\n```\n\nHere we overwrote the contents of the `h` variable with the address of a new house, but the old one is still standing... somewhere. After this code, there is no way to reach that house, and it will be left standing. In other words, the allocated memory will stay allocated until the application closes, at which point the operating system will tear it down.\n\nMemory layout after first allocation:\n\n```\n\n h\n v\n---[ttttNNNNNNNNNN]---\n 1234My house\n\n```\n\nMemory layout after second allocation:\n\n```\n\n h\n v\n---[ttttNNNNNNNNNN]---[ttttNNNNNNNNNN]\n 1234My house 5678My house\n\n```\n\nA more common way to get this method is just to forget to free something, instead of overwriting it as above. 
In Delphi terms, this will occur with the following method:\n\n```\nprocedure OpenTheFrontDoorOfANewHouse;\nvar\n h: THouse;\nbegin\n h := THouse.Create('My house');\n h.OpenFrontDoor;\n // uh-oh, no .Free here, where does the address go?\nend;\n\n```\n\nAfter this method has executed, there's no place in our variables that the address to the house exists, but the house is still out there.\n\nMemory layout:\n\n```\n\n h <--+\n v +- before losing pointer\n---[ttttNNNNNNNNNN]--- |\n 1234My house <--+\n\n h (now points nowhere) <--+\n +- after losing pointer\n---[ttttNNNNNNNNNN]--- |\n 1234My house <--+\n\n```\n\nAs you can see, the old data is left intact in memory, and will not\nbe reused by the memory allocator. The allocator keeps track of which\nareas of memory has been used, and will not reuse them unless you\nfree it.\n\n---\n\n**Freeing the memory but keeping a (now invalid) reference** \n\nDemolish the house, erase one of the pieces of paper but you also have another piece of paper with the old address on it, when you go to the address, you won't find a house, but you might find something that resembles the ruins of one.\n\nPerhaps you will even find a house, but it is not the house you were originally given the address to, and thus any attempts to use it as though it belongs to you might fail horribly.\n\nSometimes you might even find that a neighbouring address has a rather big house set up on it that occupies three address (Main Street 1-3), and your address goes to the middle of the house. Any attempts to treat that part of the large 3-address house as a single small house might also fail horribly.\n\n```\nvar\n h1, h2: THouse;\nbegin\n h1 := THouse.Create('My house');\n h2 := h1; // copies the address, not the house\n ...\n h1.Free;\n h1 := nil;\n h2.OpenFrontDoor; // uh-oh, what happened to our house?\n\n```\n\nHere the house was torn down, through the reference in `h1`, and while `h1` was cleared as well, `h2` still has the old, out-of-date, address. 
Access to the house that is no longer standing might or might not work.\n\nThis is a variation of the dangling pointer above. See its memory layout.\n\n---\n\n**Buffer overrun** \n\nYou move more stuff into the house than you can possibly fit, spilling into the neighbours house or yard. When the owner of that neighbouring house later on comes home, he'll find all sorts of things he'll consider his own.\n\nThis is the reason I chose a fixed-size array. To set the stage, assume that\nthe second house we allocate will, for some reason, be placed before the\nfirst one in memory. In other words, the second house will have a lower\naddress than the first one. Also, they're allocated right next to each other.\n\nThus, this code:\n\n```\nvar\n h1, h2: THouse;\nbegin\n h1 := THouse.Create('My house');\n h2 := THouse.Create('My other house somewhere');\n ^-----------------------^\n longer than 10 characters\n 0123456789 <-- 10 characters\n\n```\n\nMemory layout after first allocation:\n\n```\n\n h1\n v\n-----------------------[ttttNNNNNNNNNN]\n 5678My house\n\n```\n\nMemory layout after second allocation:\n\n```\n\n h2 h1\n v v\n---[ttttNNNNNNNNNN]----[ttttNNNNNNNNNN]\n 1234My other house somewhereouse\n ^---+--^\n |\n +- overwritten\n\n```\n\nThe part that will most often cause crash is when you overwrite important parts\nof the data you stored that really should not be randomly changed. 
For instance\nit might not be a problem that parts of the name of the h1-house was changed,\nin terms of crashing the program, but overwriting the overhead of the\nobject will most likely crash when you try to use the broken object,\nas will overwriting links that is stored to\nother objects in the object.\n\n---\n\n**Linked lists** \n\nWhen you follow an address on a piece of paper, you get to a house, and at that house there is another piece of paper with a new address on it, for the next house in the chain, and so on.\n\n```\nvar\n h1, h2: THouse;\nbegin\n h1 := THouse.Create('Home');\n h2 := THouse.Create('Cabin');\n h1.NextHouse := h2;\n\n```\n\nHere we create a link from our home house to our cabin. We can follow the chain until a house has no `NextHouse` reference, which means it's the last one. To visit all our houses, we could use the following code:\n\n```\nvar\n h1, h2: THouse;\n h: THouse;\nbegin\n h1 := THouse.Create('Home');\n h2 := THouse.Create('Cabin');\n h1.NextHouse := h2;\n ...\n h := h1;\n while h <> nil do\n begin\n h.LockAllDoors;\n h.CloseAllWindows;\n h := h.NextHouse;\n end;\n\n```\n\nMemory layout (added NextHouse as a link in the object, noted with\nthe four LLLL's in the below diagram):\n\n```\n\n h1 h2\n v v\n---[ttttNNNNNNNNNNLLLL]----[ttttNNNNNNNNNNLLLL]\n 1234Home + 5678Cabin +\n | ^ |\n +--------+ * (no link)\n\n```\n\n---\n\n**In basic terms, what is a memory address?**\n\nA memory address is in basic terms just a number. If you think of memory\nas a big array of bytes, the very first byte has the address 0, the next one\nthe address 1 and so on upwards. 
This is simplified, but good enough.\n\nSo this memory layout:\n\n```\n\n h1 h2\n v v\n---[ttttNNNNNNNNNN]---[ttttNNNNNNNNNN]\n 1234My house 5678My house\n\n```\n\nMight have these two address (the leftmost - is address 0):\n\n* h1 = 4\n* h2 = 23\n\nWhich means that our linked list above might actuall look like this:\n\n```\n\n h1 (=4) h2 (=28)\n v v\n---[ttttNNNNNNNNNNLLLL]----[ttttNNNNNNNNNNLLLL]\n 1234Home 0028 5678Cabin 0000\n | ^ |\n +--------+ * (no link)\n\n```\n\nIt is typical to store an address that \"points nowhere\" as a zero-address.\n\n---\n\n**In basic terms, what is a pointer?**\n\nA pointer is just a variable holding a memory address. You can typically ask the programming\nlanguage to give you its number, but most programming languages and runtimes tries to\nhide the fact that there is a number beneath, just because the number itself does not\nreally hold any meaning to you. It is best to think of a pointer as a black box, ie.\nyou don't really know or care about how it is actually implemented, just as long as it\nworks."}}},{"rowIdx":474,"cells":{"qid":{"kind":"number","value":5821,"string":"5,821"},"question":{"kind":"string","value":"
How do I escape the underscore character?
\n\n
I am writing something like the following where clause and want to be able to find actual entries with _d at the end.
\n\n
Where Username Like '%_d'\n
\n"},"answers":{"kind":"list like","value":[{"answer_id":5822,"author":"Lasse V. Karlsen","author_id":267,"author_profile":"https://Stackoverflow.com/users/267","pm_score":10,"selected":true,"text":"
You can use the wildcard pattern matching characters as literal characters. To use a wildcard character as a literal character, enclose the wildcard character in brackets. The following table shows several examples of using the LIKE keyword and the [ ] wildcard characters.
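Applied to the question's clause, the bracketed form (SQL Server's T-SQL syntax, per the documentation quoted above) would be:

```sql
WHERE Username LIKE '%[_]d'
```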
Obviously @Lasse's solution is right, but there's another way to solve your problem: the T-SQL operator LIKE defines an optional ESCAPE clause that lets you declare a character which will escape the next character in the pattern.

For your case, the following WHERE clauses are equivalent:

WHERE username LIKE '%[_]d'; -- @Lasse's solution
WHERE username LIKE '%$_d' ESCAPE '$';
WHERE username LIKE '%^_d' ESCAPE '^';
Adding to Gerardo Lima's answer, I was having problems when trying to use a backslash as my escape character for the ESCAPE clause. This caused issues:

SELECT * FROM table WHERE email LIKE '%@%\_%' ESCAPE '\'

It was resolved by switching to an exclamation point. This worked:

SELECT * FROM table WHERE email LIKE '%@%!_%' ESCAPE '!'
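The ESCAPE clause is standard SQL, so the same technique works outside SQL Server too. Here is a minimal sketch using Python's built-in sqlite3 module (the table name and usernames are invented for illustration), showing why the unescaped pattern over-matches and how the escaped one matches only names that literally end in `_d`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE forum_users (username TEXT)")
# 'fred_d' and 'mary_d' end in a literal underscore followed by 'd';
# 'fredad' does not, but it still matches the unescaped '%_d' pattern
# because _ is a single-character wildcard in LIKE.
conn.executemany("INSERT INTO forum_users VALUES (?)",
                 [("fred_d",), ("fredad",), ("mary_d",)])

unescaped = [r[0] for r in conn.execute(
    "SELECT username FROM forum_users WHERE username LIKE '%_d'")]
escaped = [r[0] for r in conn.execute(
    "SELECT username FROM forum_users WHERE username LIKE '%!_d' ESCAPE '!'")]

print(sorted(unescaped))  # wildcard _ matches any character, so all three rows
print(sorted(escaped))    # only the names with a real trailing underscore + d
```

The same `ESCAPE '!'` clause (or `'$'`, `'^'`, etc.) works in SQL Server, MySQL, PostgreSQL, and SQLite; the `[...]` bracket form is T-SQL-specific.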
How do I turn the following 2 queries into 1 query
\n\n
$sql = \"SELECT level FROM skills WHERE id = $id LIMIT 1;\";\n$result = $db->sql_query($sql);\n$level = (int) $db->sql_fetchfield('level');\n$db->sql_freeresult($result);\n\n++$level;\n\n$sql = \"UPDATE skills SET level = $level WHERE id = $id;\";\n$result = $db->sql_query($sql);\n$db->sql_freeresult($result);\n
\n\n
I'm using it in a phpBB mod, but the gist is that I grab the level, add one to it, then update. It seems that it'd be much easier and faster if I could do it as one query.
\n\n
Edit: $id has already been forced to be an integer, thus no escaping is needed this time.
$sql = \"UPDATE skills SET level = level+1 WHERE id = $id\";\n$result = $db->sql_query($sql);\n$db->sql_freeresult($result);\n
\n\n
In Teifion's specific case, the phpBB DDL lists that particular field as NOT NULL, so there's no danger of incrementing NULL.
\n\n
In the general case, you should not use NULL to represent zero. Incrementing NULL should give an answer of NULL. If you're the kind of misguided developer who thinks NULL=0, step away from the keyboard and find another pastime; you're just making life hard for the rest of us. Of course, this is the computer industry and who are we to say you're wrong? If you're not wrong, use</p>
\n\n
$sql = \"UPDATE skills SET level = COALESCE(level,0)+1 WHERE id = $id\";\n
\n\n
...but let's face it: you're wrong. If everyone starts at level 0, then your DDL should include
\n\n
level INT DEFAULT '0' NOT NULL\n
\n\n
in case the programmers forget to set it when they create a record. If not everyone starts on level 0, then skip the DEFAULT and force the programmer to supply a value on creation. If some people are beyond levels, for whom having a level is a meaningless thing, then adding one to their level equally has no meaning. In that case, drop the NOT NULL from the DDL.
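To make the NULL behaviour discussed above concrete, here is a small self-contained sketch using Python's built-in sqlite3 module (an editor's illustration, not code from the thread; the table mirrors the one in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE skills (id INTEGER PRIMARY KEY, level INTEGER)")
conn.execute("INSERT INTO skills (id, level) VALUES (1, 5), (2, NULL)")

# single-statement increment: atomic, no read-modify-write round trip
conn.execute("UPDATE skills SET level = level + 1 WHERE id = 1")

# NULL + 1 is NULL, so this statement effectively does nothing
conn.execute("UPDATE skills SET level = level + 1 WHERE id = 2")

# COALESCE treats the NULL as 0 before incrementing
conn.execute("UPDATE skills SET level = COALESCE(level, 0) + 1 WHERE id = 2")

rows = dict(conn.execute("SELECT id, level FROM skills"))
# rows == {1: 6, 2: 1}
```

The single-statement form is the same one the answers here recommend for MySQL; only the connection layer differs.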
UPDATE skills SET level = level + 1 WHERE id = $id\n
\n"},{"answer_id":5851,"author":"Lasse V. Karlsen","author_id":267,"author_profile":"https://Stackoverflow.com/users/267","pm_score":4,"selected":false,"text":"
This way:
\n\n
UPDATE skills\nSET level = level + 1\nWHERE id = $id\n
$query = $db->prepare(\\\"UPDATE skills SET level = level + 1 WHERE id = :id\\\");\\n$query->bindValue(\\\":id\\\", $id);\\n$result = $query->execute();\\n</code></pre>
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5846","https://Stackoverflow.com","https://Stackoverflow.com/users/1384652/"],"string":"[\n \"https://Stackoverflow.com/questions/5846\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/1384652/\"\n]"},"input":{"kind":"string","value":"How do I turn the following 2 queries into 1 query\n\n```\n$sql = \"SELECT level FROM skills WHERE id = $id LIMIT 1;\";\n$result = $db->sql_query($sql);\n$level = (int) $db->sql_fetchfield('level');\n$db->sql_freeresult($result);\n\n++$level;\n\n$sql = \"UPDATE skills SET level = $level WHERE id = $id;\";\n$result = $db->sql_query($sql);\n$db->sql_freeresult($result);\n\n```\n\nI'm using it in a phpBB mod but the gist is that I grab the level, add one to it then update, it seems that it'd be much easier and faster if I could do it as one query.\n\nEdit: `$id` has already been forced to be an integer, thus no escaping is needed this time."},"output":{"kind":"string","value":"I get downmodded for this?\n\n```\n$sql = \"UPDATE skills SET level = level+1 WHERE id = $id\";\n$result = $db->sql_query($sql);\n$db->sql_freeresult($result);\n\n```\n\nIn Teifion's specific case, the phpBB DDL lists that particular field as NOT NULL, so there's no danger of incrementing NULL.\n\nIn the general case, you should not use NULL to represent zero. Incrementing NULL *should* give an answer of NULL. If you're the kind of misguided developer who thinks NULL=0, step away from keyboard and find another pastime, you're just making life hard for the rest of us. Of course, this is the computer industry and who are we to say you're wrong? If you're not wrong, use\n\n```\n$sql = \"UPDATE skills SET level = COALESCE(level,0)+1 WHERE id = $id\";\n\n```\n\n...but let's face it: you're wrong. 
If everyone starts at level 0, then your DDL should include\n\n```\nlevel INT DEFAULT '0' NOT NULL\n\n```\n\nin case the programmers forget to set it when they create a record. If not everyone starts on level 0, then skip the DEFAULT and force the programmer to supply a value on creation. If some people are beyond levels, for whom having a level is a meaningless thing, then adding one to their level equally has no meaning. In that case, drop the NOT NULL from the DDL."}}},{"rowIdx":476,"cells":{"qid":{"kind":"number","value":5857,"string":"5,857"},"question":{"kind":"string","value":"
I have a page upon which a user can choose up to many different paragraphs. When the link is clicked (or button), an email will open up and put all those paragraphs into the body of the email, address it, and fill in the subject. However, the text can be too long for a mailto link.
\n\n
Any way around this?
\n\n\n\n
We were thinking about having an SP from the SQL Server do it but the user needs a nice way of 'seeing' the email before they blast 50 executive level employees with items that shouldn't be sent...and of course there's the whole thing about doing IT for IT rather than doing software programming. 80(
\n\n
When you build stuff for IT, it doesn't (some say shouldn't) have to be pretty, just functional. In other words, this isn't the dog food we make, it's just the dog food we have to eat.</p>
\n\n\n\n
We started talking about it and decided that the 'mail form' would give us exactly what we are looking for.
\n\n\n
A very different look to let the user know that the gun is loaded\nand aimed.
\n
The ability to change/add text to the email.
\n
Send a copy to themselves or not.
\n
Can be coded quickly.
\n\n"},"answers":{"kind":"list like","value":[{"answer_id":5860,"author":"Lasse V. Karlsen","author_id":267,"author_profile":"https://Stackoverflow.com/users/267","pm_score":5,"selected":true,"text":"
By putting the data into a form, I was able to make the body around 1800 characters long before the form stopped working.
Edit: The best way to send emails from a web application is of course to do just that, send it directly from the web application, instead of relying on the users mailprogram. As you've discovered, the protocol for sending information to that program is limited, but with a server-based solution you would of course not have those limitations.
Does the e-mail content need to be in the e-mail? Could you store the large content somewhere centrally (file-share/FTP site) then just send a link to the content?
\n\n
This makes the recipient have an extra step, but you have a consistent e-mail size, so won't run into reliability problems due to unexpectedly large or excessive content.
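For a rough sense of why long bodies break, here is an editor's sketch (not from the thread) that percent-encodes a body the way a mailto: URL requires and checks it against a conservative length budget; the real ceiling varies by browser and mail client, so the 1800-character figure mentioned above is only a ballpark:

```python
from urllib.parse import quote

def build_mailto(address, subject, body, limit=1800):
    # everything after "mailto:" travels percent-encoded in the URL itself,
    # so each newline becomes %0A and the link grows quickly
    url = "mailto:{}?subject={}&body={}".format(
        address, quote(subject), quote(body))
    return url, len(url) <= limit

# 400 short paragraphs blow well past the budget
url, ok = build_mailto("boss@example.com", "Report", "paragraph\n" * 400)
# ok == False
```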
Edit: The best way to send emails from a web application is of course to do just that, send it directly from the web application, instead of relying on the users mailprogram. As you've discovered, the protocol for sending information to that program is limited, but with a server-based solution you would of course not have those limitations.
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5857","https://Stackoverflow.com","https://Stackoverflow.com/users/730/"],"string":"[\n \"https://Stackoverflow.com/questions/5857\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/730/\"\n]"},"input":{"kind":"string","value":"I have a page upon which a user can choose up to many different paragraphs. When the link is clicked (or button), an email will open up and put all those paragraphs into the body of the email, address it, and fill in the subject. However, the text can be too long for a mailto link.\n\nAny way around this?\n\n---\n\nWe were thinking about having an SP from the SQL Server do it but the user needs a nice way of 'seeing' the email before they blast 50 executive level employees with items that shouldn't be sent...and of course there's the whole thing about doing IT for IT rather than doing software programming. 80(\n\nWhen you build stuff for IT, it doesn't (some say shouldn't) have to be pretty just functional. In other words, this isn't the dogfood we wake it's just the dog food we have to eat.\n\n---\n\nWe started talking about it and decided that the 'mail form' would give us exactly what we are looking for.\n\n1. A very different look to let the user know that the gun is loaded\nand aimed.\n2. The ability to change/add text to the email.\n3. Send a copy to themselves or not.\n4. Can be coded quickly."},"output":{"kind":"string","value":"By putting the data into a form, I was able to make the body around 1800 characters long before the form stopped working.\n\nThe code looked like this:\n\n```\n\n\n```\n\n---\n\n**Edit**: The best way to send emails from a web application is of course to do just that, send it directly from the web application, instead of relying on the users mailprogram. 
As you've discovered, the protocol for sending information to that program is limited, but with a server-based solution you would of course not have those limitations."}}},{"rowIdx":477,"cells":{"qid":{"kind":"number","value":5863,"string":"5,863"},"question":{"kind":"string","value":"
I'm just getting into creating some WCF services, but I have a requirement to make them backward compatible for legacy (.NET 1.1 and 2.0) client applications.
\n\n
I've managed to get the services to run correctly for 3.0 and greater clients, but when I publish the services using a basicHttpBinding endpoint (which I believe is required for the compatibility I need), the service refactors my method signatures. e.g.
\n\n
public bool MethodToReturnTrue(string seedValue);\n
\n\n
appears to the client apps as
\n\n
public void MethodToReturnTrue(string seedValue, out bool result, out bool MethodToReturnTrueResultSpecified);\n
\n\n
I've tried every configuration parameter I can think of in the app.config for my self-hosting console app, but I can't seem to make this function as expected. I suppose this might lead to the fact that my expectations are flawed, but I'd be surprised that a WCF service is incapable of handling a bool return type to a down-level client.
Ah, this is killing me! I did this at work about 3 months ago, and now I can't remember all the details.
\n\n
I do remember, however, that you need basicHttpBinding, and you can't use the new serializer (which is the default); you have to use the \"old\" XmlSerializer.
\n\n
Unfortunately, I don't work at the place where I did this anymore, so I can't go look at the code. I'll call my boss and see what I can dig up.
OK, we needed to resolve this issue in the short term, and so we came up with the idea of a \"interop\", or compatibility layer.
\n\n
Basically, all we did was add a traditional ASMX web service to the project and call the WCF service from that using native WCF calls. We were then able to return the appropriate types back to the client applications without a significant amount of re-factoring work. I know it was a hacky solution, but it was the best option we had with such a large legacy code-base. And the added bonus is that it actually works surprisingly well. :)</p>
You have to manually set the operation action name because the auto-generated WCF name is constructed differently from the ASMX action name (WCF includes the interface name as well, ASMX does not).
\n\n
Any data contracts you use should be decorated with [XmlType] rather than [DataContract].
\\n\\n
Your config file should not need to change.
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5863","https://Stackoverflow.com","https://Stackoverflow.com/users/377/"],"string":"[\n \"https://Stackoverflow.com/questions/5863\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/377/\"\n]"},"input":{"kind":"string","value":"I'm just getting into creating some WCF services, but I have a requirement to make them backward compatible for legacy (.NET 1.1 and 2.0) client applications. \n\nI've managed to get the services to run correctly for 3.0 and greater clients, but when I publish the services using a basicHttpBinding endpoint (which I believe is required for the compatibility I need), the service refactors my method signatures. e.g.\n\n```\npublic bool MethodToReturnTrue(string seedValue);\n\n```\n\nappears to the client apps as\n\n```\npublic void MethodToReturnTrue(string seedValue, out bool result, out bool MethodToReturnTrueResultSpecified);\n\n```\n\nI've tried every configuration parameter I can think of in the app.config for my self-hosting console app, but I can't seem to make this function as expected. I suppose this might lead to the fact that my expectations are flawed, but I'd be surprised that a WCF service is incapable of handling a bool return type to a down-level client.\n\nMy current app.config looks like this.\n\n```\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n```\n\nCan anyone advise, please?"},"output":{"kind":"string","value":"OK, we needed to resolve this issue in the short term, and so we came up with the idea of a \"interop\", or compatibility layer.\n\nBaiscally, all we did was added a traditional ASMX web service to the project, and called the WCF service from that using native WCF calls. We were then able to return the appropriate types back to the client applications without a significant amount of re-factoring work. 
I know it was a hacky solution, but it was the best option we had with such a large legacy code-base. And the added bonus is that it actually works surprisingly well. :)"}}},{"rowIdx":478,"cells":{"qid":{"kind":"number","value":5876,"string":"5,876"},"question":{"kind":"string","value":"
I use dnsmasq to resolve DNS queries on my home network. Unfortunately, if a domain name is not known, it will append my own domain name to the request which means that I always end up viewing my own site in a browser.
\n\n
For example, if I enter http://dlksfhoiahdsfiuhsdf.com in a browser, I end up viewing my own site with that URL. If I try something like:
\n\n
host dlksfhoiahdsfiuhsdf.com\n
\n\n
Instead of the expected:
\n\n
Host dlksfhoiahdsfiuhsdf.com not found: 3(NXDOMAIN)\n
\n\n
I get this:
\n\n
dlksfhoiahdsfiuhsdf.com.mydomainname.com has address W.X.Y.Z.\n
\n\n
Clearly, dnsmasq is appending my domain name to impossible name requests in an effort to resolve them, but I'd rather see the not found error instead.
\n\n
I've tried playing with the expand-hosts and domain configuration settings, but to no avail. Is there anything else I can try?
I tried removing domain-needed from my own configuration to replicate your issue and it did not produce this behaviour. It's the only other parameter I could find that might be close to relevant.
\n
What does your hosts file look like? Maybe something weird is going on there that makes it think all weird domains are local to your network?
It is probably not dnsmasq doing it, but your local resolver library. If you use a unixish, try removing the \"search\" or \"domain\" lines from /etc/resolv.conf
There might be other causes, but the most obvious cause is the configuration of /etc/resolv.conf, and the fact that most DNS clients like to be very terse about errors.
\n\n
benc$ host thing.one\nHost thing.one not found: 3(NXDOMAIN)\n
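The search-list behaviour described in these answers can be sketched in a few lines. This is an editor's simplified model of the resolver's qualification rules (assuming the common default of ndots:1), not dnsmasq's actual code:

```python
def candidate_names(name, search_domains, ndots=1):
    # a trailing dot marks the name as fully qualified: never expanded
    if name.endswith("."):
        return [name.rstrip(".")]
    if name.count(".") >= ndots:
        # "enough" dots: try the name as-is first, then the search list
        return [name] + [name + "." + d for d in search_domains]
    # too few dots: search list first, bare name last
    return [name + "." + d for d in search_domains] + [name]

candidate_names("dlksfhoiahdsfiuhsdf.com", ["mydomainname.com"])
# -> ['dlksfhoiahdsfiuhsdf.com', 'dlksfhoiahdsfiuhsdf.com.mydomainname.com']
```

When the first candidate comes back NXDOMAIN, the appended form is tried next, which is exactly where a wildcard record can swallow the query; a trailing dot stops the expansion.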
dnsmasq is forwarding the appended name out to the external DNS server, and it's getting wildcarded.</p>
\\n\\n
you can use --server=/yourinternaldomainhere/ to make sure that your internal domain name lookups are not forwarded out.
\\n\\n
syntax in this case would be:
\\n\\n
--server=/domain/iptoforwardto
\\n\\n
and in this case leave the iptoforwardto area blank as you don't want it to forward anywhere.
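In a config file rather than on the command line, that would look something like this (the domain name is a placeholder):

```
# dnsmasq.conf: answer queries for the internal domain locally;
# with no upstream address after the domain, nothing is forwarded out
server=/mydomainname.com/
```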
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5876","https://Stackoverflow.com","https://Stackoverflow.com/users/726/"],"string":"[\n \"https://Stackoverflow.com/questions/5876\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/726/\"\n]"},"input":{"kind":"string","value":"I use [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html) to resolve DNS queries on my home network. Unfortunately, if a domain name is not known, it will append my own domain name to the request which means that I always end up viewing my own site in a browser.\n\nFor example, if I enter in a browser, I end up viewing my own site with that URL. If I try something like:\n\n```\nhost dlksfhoiahdsfiuhsdf.com\n\n```\n\nInstead of the expected:\n\n```\nHost dlksfhoiahdsfiuhsdf.com not found: 3(NXDOMAIN)\n\n```\n\nI get this:\n\n```\ndlksfhoiahdsfiuhsdf.com.mydomainname.com has address W.X.Y.Z.\n\n```\n\nClearly, dnsmasq is appending my domain name to impossible name requests in an effort to resolve them, but I'd rather see the not found error instead.\n\nI've tried playing with the *expand-hosts* and *domain* configuration settings, but to no avail. Is there anything else I can try?"},"output":{"kind":"string","value":"try querying with a trailing dot to explicitly set the root:\n\n```\nhost dlksfhoiahdsfiuhsdf.com.\n\n```"}}},{"rowIdx":479,"cells":{"qid":{"kind":"number","value":5909,"string":"5,909"},"question":{"kind":"string","value":"
I'm downloading an entire directory from a web server. It works OK, but I can't figure how to get the file size before download to compare if it was updated on the server or not. Can this be done as if I was downloading the file from a FTP server?
\n\n
import urllib\nimport re\n\nurl = \"http://www.someurl.com\"\n\n# Download the page locally\nf = urllib.urlopen(url)\nhtml = f.read()\nf.close()\n\nf = open (\"temp.htm\", \"w\")\nf.write (html)\nf.close()\n\n# List only the .TXT / .ZIP files\nfnames = re.findall('^.*<a href=\"(\\w+(?:\\.txt|.zip)?)\".*$', html, re.MULTILINE)\n\nfor fname in fnames:\n print fname, \"...\"\n\n f = urllib.urlopen(url + \"/\" + fname)\n\n #### Here I want to check the filesize to download or not #### \n file = f.read()\n f.close()\n\n f = open (fname, \"w\")\n f.write (file)\n f.close()\n
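An aside on the scraping step: the regex above is fragile; the dot in `.zip` is unescaped and the trailing `?` makes the whole extension optional, so it can over-match. A sketch using the standard library's HTML parser is sturdier (written for Python 3 here, whereas the question uses Python 2):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href values that end in .txt or .zip."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.endswith((".txt", ".zip")):
                    self.links.append(value)

parser = LinkCollector()
parser.feed('<a href="a.txt">a</a> <a href="b.html">b</a> <a href="c.zip">c</a>')
# parser.links == ['a.txt', 'c.zip']
```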
\n\n\n\n
@Jon: thanks for your quick answer. It works, but the filesize on the web server is slightly less than the filesize of the downloaded file.</p>
\n\n
Examples:
\n\n
Local Size Server Size\n 2.223.533 2.115.516\n 664.603 662.121\n
The size of the file is sent as the Content-Length header. Here is how to get it with urllib:
\n\n
>>> site = urllib.urlopen(\"http://python.org\")\n>>> meta = site.info()\n>>> print meta.getheaders(\"Content-Length\")\n['16535']\n>>>\n
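To close the loop on the original "download it or not" question, here is an editor's Python 3 sketch built on that header (the thread's code is Python 2). Note that Content-Length may be absent, or may describe a compressed transfer when Content-Encoding is in play, so treat it as a heuristic rather than a guarantee:

```python
import os
import urllib.request

def should_download(remote, local):
    # re-fetch when there is no local copy or the sizes disagree
    return local is None or local != remote

def local_size(path):
    return os.path.getsize(path) if os.path.exists(path) else None

def remote_size(url):
    # a HEAD request reads the headers without transferring the body
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        value = resp.headers.get("Content-Length")
    return int(value) if value is not None else None
```

A Last-Modified / If-Modified-Since comparison is the more robust variant of the same idea.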
Using the returned-urllib-object method info(), you can get various information on the retrieved document. Example of grabbing the current Google logo:
\n
>>> import urllib\n>>> d = urllib.urlopen("http://www.google.co.uk/logos/olympics08_opening.gif")\n>>> print d.info()\n\nContent-Type: image/gif\nLast-Modified: Thu, 07 Aug 2008 16:20:19 GMT \nExpires: Sun, 17 Jan 2038 19:14:07 GMT \nCache-Control: public \nDate: Fri, 08 Aug 2008 13:40:41 GMT \nServer: gws \nContent-Length: 20172 \nConnection: Close\n
\n
It's a dict, so to get the size of the file, you do urllibobject.info()['Content-Length']
\n
print f.info()['Content-Length']\n
\n
And to get the size of the local file (for comparison), you can use the os.stat() command:
import urllib, os\nlink = \"http://python.org\"\nprint \"opening url:\", link\nsite = urllib.urlopen(link)\nmeta = site.info()\nprint \"Content-Length:\", meta.getheaders(\"Content-Length\")[0]\n\nf = open(\"out.txt\", \"r\")\nprint \"File on disk:\",len(f.read())\nf.close()\n\n\nf = open(\"out.txt\", \"w\")\nf.write(site.read())\nsite.close()\nf.close()\n\nf = open(\"out.txt\", \"r\")\nprint \"File on disk after download:\",len(f.read())\nf.close()\n\nprint \"os.stat().st_size returns:\", os.stat(\"out.txt\").st_size\n
\n\n
Outputs this:
\n\n
opening url: http://python.org\nContent-Length: 16535\nFile on disk: 16535\nFile on disk after download: 16535\nos.stat().st_size returns: 16861\n
\n\n
What am I doing wrong here? Is os.stat().st_size not returning the correct size?
\n\n\n\n
Edit:\nOK, I figured out what the problem was:
\n\n
import urllib, os\nlink = \"http://python.org\"\nprint \"opening url:\", link\nsite = urllib.urlopen(link)\nmeta = site.info()\nprint \"Content-Length:\", meta.getheaders(\"Content-Length\")[0]\n\nf = open(\"out.txt\", \"rb\")\nprint \"File on disk:\",len(f.read())\nf.close()\n\n\nf = open(\"out.txt\", \"wb\")\nf.write(site.read())\nsite.close()\nf.close()\n\nf = open(\"out.txt\", \"rb\")\nprint \"File on disk after download:\",len(f.read())\nf.close()\n\nprint \"os.stat().st_size returns:\", os.stat(\"out.txt\").st_size\n
\n\n
this outputs:
\n\n
$ python test.py\nopening url: http://python.org\nContent-Length: 16535\nFile on disk: 16535\nFile on disk after download: 16535\nos.stat().st_size returns: 16535\n
\n\n
Make sure you are opening both files for binary read/write.
\n\n
// open for binary write\nopen(filename, \"wb\")\n// open for binary read\nopen(filename, \"rb\")\n
A requests-based solution using HEAD instead of GET (also prints HTTP headers):
\n\n
#!/usr/bin/python\n# display size of a remote file without downloading\n\nfrom __future__ import print_function\nimport sys\nimport requests\n\n# number of bytes in a megabyte\nMBFACTOR = float(1 << 20)\n\nresponse = requests.head(sys.argv[1], allow_redirects=True)\n\nprint(\"\\n\".join([('{:<40}: {}'.format(k, v)) for k, v in response.headers.items()]))\nsize = response.headers.get('content-length', 0)\nprint('{:<40}: {:.2f} MB'.format('FILE SIZE', int(size) / MBFACTOR))\n
For a python3 (tested on 3.5) approach I'd recommend:
\n\n
from urllib.request import urlopen\n\nwith urlopen(file_url) as in_file, open(local_file_address, 'wb') as out_file:\n    print(in_file.getheader('Content-Length'))\n    out_file.write(in_file.read())\n</code></pre>
@PabloG Regarding the local/server filesize difference
\n
Following is high-level illustrative explanation of why it may occur:
\n
The size on disk is sometimes different from the actual size of the data.\nIt depends on the underlying file-system and how it operates on data.\nAs you may have seen in Windows when formatting a flash drive, you are asked to provide a 'block/cluster size', and it varies [512b - 8kb].\nWhen a file is written on the disk, it is stored in a 'sort-of linked list' of disk blocks.\nWhen a certain block is used to store part of a file, no other file contents will be stored in the same block, so even if the chunk is not occupying the entire block space, the block is rendered unusable by other files.</p>
\n
Example:\nWhen the filesystem is divided into 512b blocks and we need to store a 600b file, two blocks will be occupied. The first block will be fully utilized, while the second will have only 88b utilized; the remaining (512-88)b will be unusable, resulting in a 'file-size-on-disk' of 1024b.\nThis is why Windows has different notations for 'file size' and 'size on disk'.</p>
\n
NOTE:\nThere are different pros & cons that come with smaller/bigger FS block, so do a better research before playing with your filesystem.
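The 600b example above works out as follows; this is an editor's tiny sketch of the rounding only, nothing filesystem-specific:

```python
import math

def size_on_disk(file_size, block_size=512):
    # a file occupies whole blocks; the last, partly filled block
    # still counts in full toward the on-disk footprint
    return math.ceil(file_size / block_size) * block_size

size_on_disk(600)    # -> 1024, matching the example
size_on_disk(16535)  # -> 16896 on 512b blocks
```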
\nFile_Name=requests.head(LINK).headers["X-File-Name"]\n\n#And other useful info** like the size of the file from this dict (headers)\n#like \n\nFile_size=requests.head(LINK).headers["Content-Length"]\n
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5909","https://Stackoverflow.com","https://Stackoverflow.com/users/394/"],"string":"[\n \"https://Stackoverflow.com/questions/5909\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/394/\"\n]"},"input":{"kind":"string","value":"I'm downloading an entire directory from a web server. It works OK, but I can't figure how to get the file size before download to compare if it was updated on the server or not. Can this be done as if I was downloading the file from a FTP server?\n\n```\nimport urllib\nimport re\n\nurl = \"http://www.someurl.com\"\n\n# Download the page locally\nf = urllib.urlopen(url)\nhtml = f.read()\nf.close()\n\nf = open (\"temp.htm\", \"w\")\nf.write (html)\nf.close()\n\n# List only the .TXT / .ZIP files\nfnames = re.findall('^.*This gets the value of whatever is selected in my dropdown menu.\n\n
\n"},{"answer_id":46924743,"author":"Muhammad Tariq Ahmed Khan","author_id":8829462,"author_profile":"https://Stackoverflow.com/users/8829462","pm_score":2,"selected":false,"text":"
function getValue(obj)\n{ \n // it will return the selected text\n // obj variable will contain the object of check box\n var text = obj.options[obj.selectedIndex].innerHTML ; \n\n}\n
Found this a tricky question but using ideas from here I eventually got the solution using PHP & Mysqli to populate the list : and then a bit of javascript to get the working variable out.
<script language="JavaScript">\n \n function changemfr()\n {\n var $mfr2=document.getElementById("mfrbtn").selectedOptions[0].text;\n alert($mfr2);\n }\n</script>\n
function getValue(obj)\\n{ \\n // it will return the selected text\\n // obj variable will contain the object of check box\\n var text = obj.options[obj.selectedIndex].innerHTML ; \\n\\n}\\n
Found this a tricky question but using ideas from here I eventually got the solution using PHP & Mysqli to populate the list : and then a bit of javascript to get the working variable out.
<script language="JavaScript">\\n \\n function changemfr()\\n {\\n var $mfr2=document.getElementById("mfrbtn").selectedOptions[0].text;\\n alert($mfr2);\\n }\\n</script>\\n
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5913","https://Stackoverflow.com","https://Stackoverflow.com/users/1384652/"],"string":"[\n \"https://Stackoverflow.com/questions/5913\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/1384652/\"\n]"},"input":{"kind":"string","value":"This gets the value of whatever is selected in my dropdown menu.\n\n```\ndocument.getElementById('newSkill').value\n\n```\n\nI cannot however find out what property to go after for the text that's currently displayed by the drop down menu. I tried \"text\" then looked at [W3Schools](http://w3schools.com/) but that didn't have the answer, does anybody here know?\n\nFor those not sure, here's the HTML for a drop down box.\n\n```\n\n\n```"},"output":{"kind":"string","value":"Based on your example HTML code, here's one way to get the displayed text of the currently selected option:\n\n```\nvar skillsSelect = document.getElementById(\"newSkill\");\nvar selectedText = skillsSelect.options[skillsSelect.selectedIndex].text;\n\n```"}}},{"rowIdx":481,"cells":{"qid":{"kind":"number","value":5916,"string":"5,916"},"question":{"kind":"string","value":"
For those of you in the Visual Studio environment, how do you feel about wrapping any of your code in #regions? (or if any other IDE has something similar...)
I use Textmate (Mac only) which has Code folding and I find it really useful for folding functions, I know what my \"getGet\" function does, I don't need it taking up 10 lines of oh so valuable screen space.
\n\n
I never use it to hide a for loop, if statement or similar unless showing the code to someone else where I will hide code they have seen to avoid showing the same code twice.
While I understand the problem that Jeff, et. al. have with regions, what I don't understand is why hitting CTRL+M,CTRL+L to expand all regions in a file is so difficult to deal with.
I prefer #regions myself, but an old coworker couldn't stand to have things hidden. I understood his point once I worked on a page with 7 #regions, at least 3 of which had been auto-generated and had the same name, but in general I think they're a useful way of splitting things up and keeping everything less cluttered.
Extensive use of regions by others also give me the impression that someone, somewhere, is violating the Single Responsibility Principle and is trying to do too many things with one object.
I'm not a fan of partial classes - I try to develop my classes such that each class has a very clear, single issue for which it's responsible. To that end, I don't believe that something with a clear responsibility should be split across multiple files. That's why I don't like partial classes.
\n\n
With that said, I'm on the fence about regions. For the most part, I don't use them; however, I work with code every day that includes regions - some people go really heavy on them (folding up private methods into a region and then each method folded into its own region), and some people go light on them (folding up enums, folding up attributes, etc). My general rule of thumb, as of now, is that I only put code in regions if (a) the data is likely to remain static or will not be touched very often (like enums), or (b) if there are methods that are implemented out of necessity because of subclassing or abstract method implementation, but, again, won't be touched very often.
Sometimes you might find yourself working on a team where #regions are encouraged or required. If you're like me and you can't stand messing around with folded code you can turn off outlining for C#:
I use #Region to hide ugly and useless automatically generated code, which really belongs in the automatically generated part of the partial class. But, when working with old projects or upgraded projects, you don't always have that luxury.
\n\n
As for other types of folding, I fold Functions all the time. If you name the function well, you will never have to look inside unless you're testing something or (re-)writing it.
Partial classes are provided so that you can separate tool auto-generated code from any customisations you may need to make after the code gen has done its bit. This means your code stays intact after you re-run the codegen and doesn't get overwritten. This is a good thing.
I really don't have a problem with using #region to organize code. Personally, I'll usually setup different regions for things like properties, event handlers, and public/private methods.
Eclipse does some of this in Java (or PHP with plugins) on its own. Allows you to fold functions and such. I tend to like it. If I know what a function does and I am not working on it, I dont need to look at it.
9 out of 10 times, code folding means that you have failed to use the SoC principle for what its worth. \nI more or less feel the same thing about partial classes. If you have a piece of code you think is too big you need to chop it up in manageable (and reusable) parts, not hide or split it up. It will bite you the next time someone needs to change it, and cannot see the logic hidden in a 250 line monster of a method. \n \nWhenever you can, pull some code out of the main class, and into a helper or factory class.\n
\n\n
foreach (var item in Items)\n{\n //.. 100 lines of validation and data logic..\n}\n
\n\n
is not as readable as
\n\n
foreach (var item in Items)\n{\n if (ValidatorClass.Validate(item))\n RepositoryClass.Update(item);\n}\n
Regions must never be used inside methods. They may be used to group methods but this must be handled with extreme caution so that the reader of the code does not go insane. There is no point in folding methods by their modifiers. But sometimes folding may increase readability. For e.g. grouping some methods that you use for working around some issues when using an external library and you won't want to visit too often may be helpful. But the coder must always seek for solutions like wrapping the library with appropriate classes in this particular example. When all else fails, use folding for improving readibility.
I think that it's a useful tool, when used properly. In many cases, I feel that methods and enumerations and other things that are often folded should be little black boxes. Unless you must look at them for some reason, their contents don't matter and should be as hidden as possible. However, I never fold private methods, comments, or inner classes. Methods and enums are really the only things I fold.
My approach is similar to a few others here, using regions to organize code blocks into constructors, properties, events, etc.
\n\n
There's an excellent set of VS.NET macros by Roland Weigelt available from his blog entry, Better Keyboard Support for #region ... #endregion. I've been using these for years, mapping ctrl+. to collapse the current region and ctrl++ to expand it. Find that it works a lot better that the default VS.NET functionality which folds/unfolds everything.
The Coding Horror article actual got me thinking about this as well.
\n\n
Generally, I large classes I will put a region around the member variables, constants, and properties to reduce the amount of text I have to scroll through and leave everything else outside of a region. On forms I will generally group things into \"member variables, constants, and properties\", form functions, and event handlers. Once again, this is more so I don't have to scroll through a lot of text when I just want to review some event handlers.
Emacs has a folding minor mode, but I only fire it up occasionally. Mostly when I'm working on some monstrosity inherited from another physicist who evidently had less instruction or took less care about his/her coding practices.
This is just one of those silly discussions that lead to nowhere. If you like regions, use them. If you don't, configure your editor to turn them off. There, everybody is happy.
Using regions (or otherwise folding code) should have nothing to do with code smells (or hiding them) or any other idea of hiding code you don't want people to \"easily\" see.
\n\n
Regions and code folding is really all about providing a way to easily group sections of code that can be collapsed/folded/hidden to minimize the amount of extraneous \"noise\" around what you are currently working on. If you set things up correctly (meaning actually name your regions something useful, like the name of the method contained) then you can collapse everything except for the function you are currently editing and still maintain some level of context without having to actually see the other code lines.
\n\n
There probably should be some best practice type guidelines around these ideas, but I use regions extensively to provide a standard structure to my code files (I group events, class-wide fields, private properties/methods, public properties/methods). Each method or property also has a region, where the region name is the method/property name. If I have a bunch of overloaded methods, the region name is the full signature and then that entire group is wrapped in a region that is just the function name.
Region folding would be fine if I didn't have to manually maintain region groupings based on features of my code that are intrinsic to the language. For example, the compiler already knows it's a constructor. The IDE's code model already knows it's a constructor. But if I want to see a view of the code where the constructors are grouped together, for some reason I have to restate the fact that these things are constructors, by physically placing them together and then putting a group around them. The same goes for any other way of slicing up a class/struct/interface. What if I change my mind and want to see the public/protected/private stuff separated out into groups first, and then grouped by member kind?
\n
Using regions to mark out public properties (for example) is as bad as entering a redundant comment that adds nothing to what is already discernible from the code itself.
\n
Anyway, to avoid having to use regions for that purpose, I wrote a free, open source Visual Studio 2008 IDE add-in called Ora. It provides a grouped view automatically, making it far less necessary to maintain physical grouping or to use regions. You may find it useful.
I generally find that when dealing with code like Events in C# where there's about 10 lines of code that are actually just part of an event declaration (the EventArgs class the delegate declaration and the event declaration) Putting a region around them and then folding them out of the way makes it a little more readable.
I personally hate regions. The only code that should be in regions in my opinion is generated code.\nWhen I open file I always start with Ctrl+M+O. This folds to method level. When you have regions you see nothing but region names.
\n
Before checking in I group methods/fields logically so that it looks ok after Ctrl+M+O.\nIf you need regions you have to much lines in your class. I also find that this is very common.
\n
region ThisLooksLikeWellOrganizedCodeBecauseIUseRegions
I use Textmate (Mac only) which has Code folding and I find it really useful for folding functions, I know what my \\\"getGet\\\" function does, I don't need it taking up 10 lines of oh so valuable screen space.
\\n\\n
I never use it to hide a for loop, if statement or similar unless showing the code to someone else where I will hide code they have seen to avoid showing the same code twice.
While I understand the problem that Jeff, et. al. have with regions, what I don't understand is why hitting CTRL+M,CTRL+L to expand all regions in a file is so difficult to deal with.
I prefer #regions myself, but an old coworker couldn't stand to have things hidden. I understood his point once I worked on a page with 7 #regions, at least 3 of which had been auto-generated and had the same name, but in general I think they're a useful way of splitting things up and keeping everything less cluttered.
Extensive use of regions by others also give me the impression that someone, somewhere, is violating the Single Responsibility Principle and is trying to do too many things with one object.
I'm not a fan of partial classes - I try to develop my classes such that each class has a very clear, single issue for which it's responsible. To that end, I don't believe that something with a clear responsibility should be split across multiple files. That's why I don't like partial classes.
\\n\\n
With that said, I'm on the fence about regions. For the most part, I don't use them; however, I work with code every day that includes regions - some people go really heavy on them (folding up private methods into a region and then each method folded into its own region), and some people go light on them (folding up enums, folding up attributes, etc). My general rule of thumb, as of now, is that I only put code in regions if (a) the data is likely to remain static or will not be touched very often (like enums), or (b) if there are methods that are implemented out of necessity because of subclassing or abstract method implementation, but, again, won't be touched very often.
Sometimes you might find yourself working on a team where #regions are encouraged or required. If you're like me and you can't stand messing around with folded code you can turn off outlining for C#:
\\n\\n\\n
Options -> Text Editor -> C# -> Advanced Tab
\\n
Uncheck \\\"Enter outlining mode when files open\\\"
I use #Region to hide ugly and useless automatically generated code, which really belongs in the automatically generated part of the partial class. But, when working with old projects or upgraded projects, you don't always have that luxury.
\\n\\n
As for other types of folding, I fold Functions all the time. If you name the function well, you will never have to look inside unless you're testing something or (re-)writing it.
Partial classes are provided so that you can separate tool auto-generated code from any customisations you may need to make after the code gen has done its bit. This means your code stays intact after you re-run the codegen and doesn't get overwritten. This is a good thing.
I really don't have a problem with using #region to organize code. Personally, I'll usually setup different regions for things like properties, event handlers, and public/private methods.
Eclipse does some of this in Java (or PHP with plugins) on its own. Allows you to fold functions and such. I tend to like it. If I know what a function does and I am not working on it, I dont need to look at it.
9 out of 10 times, code folding means that you have failed to use the SoC principle for what its worth. \\nI more or less feel the same thing about partial classes. If you have a piece of code you think is too big you need to chop it up in manageable (and reusable) parts, not hide or split it up. It will bite you the next time someone needs to change it, and cannot see the logic hidden in a 250 line monster of a method. \\n \\nWhenever you can, pull some code out of the main class, and into a helper or factory class.\\n
\\n\\n
foreach (var item in Items)\\n{\\n //.. 100 lines of validation and data logic..\\n}\\n
\\n\\n
is not as readable as
\\n\\n
foreach (var item in Items)\\n{\\n if (ValidatorClass.Validate(item))\\n RepositoryClass.Update(item);\\n}\\n
Regions must never be used inside methods. They may be used to group methods but this must be handled with extreme caution so that the reader of the code does not go insane. There is no point in folding methods by their modifiers. But sometimes folding may increase readability. For e.g. grouping some methods that you use for working around some issues when using an external library and you won't want to visit too often may be helpful. But the coder must always seek for solutions like wrapping the library with appropriate classes in this particular example. When all else fails, use folding for improving readibility.
I think that it's a useful tool, when used properly. In many cases, I feel that methods and enumerations and other things that are often folded should be little black boxes. Unless you must look at them for some reason, their contents don't matter and should be as hidden as possible. However, I never fold private methods, comments, or inner classes. Methods and enums are really the only things I fold.
My approach is similar to a few others here, using regions to organize code blocks into constructors, properties, events, etc.
\\n\\n
There's an excellent set of VS.NET macros by Roland Weigelt available from his blog entry, Better Keyboard Support for #region ... #endregion. I've been using these for years, mapping ctrl+. to collapse the current region and ctrl++ to expand it. Find that it works a lot better that the default VS.NET functionality which folds/unfolds everything.
The Coding Horror article actual got me thinking about this as well.
\\n\\n
Generally, I large classes I will put a region around the member variables, constants, and properties to reduce the amount of text I have to scroll through and leave everything else outside of a region. On forms I will generally group things into \\\"member variables, constants, and properties\\\", form functions, and event handlers. Once again, this is more so I don't have to scroll through a lot of text when I just want to review some event handlers.
Emacs has a folding minor mode, but I only fire it up occasionally. Mostly when I'm working on some monstrosity inherited from another physicist who evidently had less instruction or took less care about his/her coding practices.
This is just one of those silly discussions that lead to nowhere. If you like regions, use them. If you don't, configure your editor to turn them off. There, everybody is happy.
Using regions (or otherwise folding code) should have nothing to do with code smells (or hiding them) or any other idea of hiding code you don't want people to \\\"easily\\\" see.
\\n\\n
Regions and code folding is really all about providing a way to easily group sections of code that can be collapsed/folded/hidden to minimize the amount of extraneous \\\"noise\\\" around what you are currently working on. If you set things up correctly (meaning actually name your regions something useful, like the name of the method contained) then you can collapse everything except for the function you are currently editing and still maintain some level of context without having to actually see the other code lines.
\\n\\n
There probably should be some best practice type guidelines around these ideas, but I use regions extensively to provide a standard structure to my code files (I group events, class-wide fields, private properties/methods, public properties/methods). Each method or property also has a region, where the region name is the method/property name. If I have a bunch of overloaded methods, the region name is the full signature and then that entire group is wrapped in a region that is just the function name.
Region folding would be fine if I didn't have to manually maintain region groupings based on features of my code that are intrinsic to the language. For example, the compiler already knows it's a constructor. The IDE's code model already knows it's a constructor. But if I want to see a view of the code where the constructors are grouped together, for some reason I have to restate the fact that these things are constructors, by physically placing them together and then putting a group around them. The same goes for any other way of slicing up a class/struct/interface. What if I change my mind and want to see the public/protected/private stuff separated out into groups first, and then grouped by member kind?
\\n
Using regions to mark out public properties (for example) is as bad as entering a redundant comment that adds nothing to what is already discernible from the code itself.
\\n
Anyway, to avoid having to use regions for that purpose, I wrote a free, open source Visual Studio 2008 IDE add-in called Ora. It provides a grouped view automatically, making it far less necessary to maintain physical grouping or to use regions. You may find it useful.
I generally find that when dealing with code like Events in C# where there's about 10 lines of code that are actually just part of an event declaration (the EventArgs class the delegate declaration and the event declaration) Putting a region around them and then folding them out of the way makes it a little more readable.
I personally hate regions. The only code that should be in regions in my opinion is generated code.\\nWhen I open file I always start with Ctrl+M+O. This folds to method level. When you have regions you see nothing but region names.
\\n
Before checking in I group methods/fields logically so that it looks ok after Ctrl+M+O.\\nIf you need regions you have to much lines in your class. I also find that this is very common.
\\n
region ThisLooksLikeWellOrganizedCodeBecauseIUseRegions
That's all I use regions for. I had no idea you could use them inside of methods.
\\n\\n
Sounds like a terrible idea :)
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5916","https://Stackoverflow.com","https://Stackoverflow.com/users/396/"],"string":"[\n \"https://Stackoverflow.com/questions/5916\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/396/\"\n]"},"input":{"kind":"string","value":"For those of you in the Visual Studio environment, how do you feel about wrapping any of your code in #regions? (or if any other IDE has something similar...)"},"output":{"kind":"string","value":"9 out of 10 times, code folding means that you have failed to use the [SoC principle](http://en.wikipedia.org/wiki/Separation_of_concerns) for what its worth. \n\nI more or less feel the same thing about partial classes. If you have a piece of code you think is too big you need to chop it up in manageable (and reusable) parts, not hide or split it up. \nIt will bite you the next time someone needs to change it, and cannot see the logic hidden in a 250 line monster of a method. \n\nWhenever you can, pull some code out of the main class, and into a helper or factory class.\n\n```\nforeach (var item in Items)\n{\n //.. 100 lines of validation and data logic..\n}\n\n```\n\nis not as readable as\n\n```\nforeach (var item in Items)\n{\n if (ValidatorClass.Validate(item))\n RepositoryClass.Update(item);\n}\n\n```\n\nMy $0.02 anyways."}}},{"rowIdx":482,"cells":{"qid":{"kind":"number","value":5949,"string":"5,949"},"question":{"kind":"string","value":"
I've always preferred to use long integers as primary keys in databases, for simplicity and (assumed) speed. But when using a REST or Rails-like URL scheme for object instances, I'd then end up with URLs like this:
\n\n
http://example.com/user/783
\n\n
And then the assumption is that there are also users with IDs of 782, 781, ..., 2, and 1. Assuming that the web app in question is secure enough to prevent people entering other numbers to view other users without authorization, a simple sequentially-assigned surrogate key also \"leaks\" the total number of instances (older than this one), in this case users, which might be privileged information. (For instance, I am user #726 in stackoverflow.)
\n\n
Would a UUID/GUID be a better solution? Then I could set up URLs like this:

http://example.com/user/035a46e0-6550-11dd-ad8b-0800200c9a66
Not exactly succinct, but there's less implied information about users on display. Sure, it smacks of \"security through obscurity\" which is no substitute for proper security, but it seems at least a little more secure.
\n\n
Is that benefit worth the cost and complexity of implementing UUIDs for web-addressable object instances? I think that I'd still want to use integer columns as database PKs just to speed up joins.
\n\n
There's also the question of in-database representation of UUIDs. I know MySQL stores them as 36-character strings. Postgres seems to have a more efficient internal representation (128 bits?) but I haven't tried it myself. Anyone have any experience with this?
\n\n\n\n
Update: for those who asked about just using the user name in the URL (e.g., http://example.com/user/yukondude), that works fine for object instances with names that are unique, but what about the zillions of web app objects that can really only be identified by number? Orders, transactions, invoices, duplicate image names, stackoverflow questions, ...
I work with a student management system which uses UUIDs in the form of an integer. They have a table which holds the next unique ID.
\n\n
Although this is probably a good idea from an architectural point of view, it makes working with the system on a daily basis difficult. Sometimes there is a need to do bulk inserts, and having a UUID makes this very difficult, usually requiring writing a cursor instead of a simple SELECT INTO statement.
I can't say about the web side of your question. But uuids are great for n-tier applications. PK generation can be decentralized: each client generates its own pk without risk of collision. And the speed difference is generally small.
\n\n
Make sure your database supports an efficient storage datatype (16 bytes, 128 bits). At the very least you can encode the uuid string in base64 and use char(22).
\n\n
I've used them extensively with Firebird and do recommend.
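As a rough sketch of that base64 trick (not tied to any particular database): the 16 raw bytes of a UUID encode to 24 URL-safe base64 characters, the last two of which are always `=` padding, leaving 22 characters that round-trip losslessly:

```python
import base64
import uuid

def uuid_to_base64(u: uuid.UUID) -> str:
    """Pack a UUID's 16 raw bytes into a 22-character URL-safe base64 string."""
    # 16 bytes -> 24 base64 chars; strip the two trailing '=' padding chars.
    return base64.urlsafe_b64encode(u.bytes).rstrip(b"=").decode("ascii")

def base64_to_uuid(s: str) -> uuid.UUID:
    """Reverse the encoding by restoring the stripped '==' padding."""
    return uuid.UUID(bytes=base64.urlsafe_b64decode(s + "=="))

key = uuid.uuid4()
print(uuid_to_base64(key))  # 22 characters instead of the 36-character hex form
```

The same pair of functions works for storing the value back into a char(22) column and reconstructing the UUID on read.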
I can answer you that in SQL Server, if you use a uniqueidentifier (GUID) datatype and use the NEWID() function to create values, you will get horrible fragmentation because of page splits. The reason is that when using NEWID() the value generated is not sequential. SQL Server 2005 added the NEWSEQUENTIALID() function to remedy that.
\n\n
One way to still use GUIDs and ints is to have both a guid and an int column in the table, so that the guid maps to the int. The guid is used externally, but the int internally in the DB.
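A minimal in-memory sketch of that external-GUID / internal-int mapping (in a real database this would be a two-column table with a unique index on the guid and an auto-increment int):

```python
import itertools
import uuid

_next_id = itertools.count(1)                 # stand-in for an auto-increment column
_guid_to_int: dict[uuid.UUID, int] = {}       # stand-in for the mapping table

def internal_id(guid: uuid.UUID) -> int:
    """Return the compact internal integer for an externally-visible GUID."""
    if guid not in _guid_to_int:
        _guid_to_int[guid] = next(_next_id)
    return _guid_to_int[guid]

g = uuid.uuid4()
assert internal_id(g) == internal_id(g)       # mapping is stable, so joins stay fast
```

URLs and API payloads carry the guid; every join and foreign key inside the database uses the int.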
I don't think a GUID gives you many benefits. Users hate long, incomprehensible URLs.
\n\n
Create a shorter ID that you can map to the URL, or enforce a unique user name convention (http://example.com/user/brianly). The guys at 37Signals would probably mock you for worrying about something like this when it comes to a web app.
\n\n
Incidentally you can force your database to start creating integer IDs from a base value.
For what it's worth, I've seen a long running stored procedure (9+ seconds) drop to just a few hundred milliseconds of run time simply by switching from GUID primary keys to integers. That's not to say displaying a GUID is a bad idea, but as others have pointed out, joining on them, and indexing them, by definition, is not going to be anywhere near as fast as with integers.
You could use an integer which is related to the row number but is not sequential. For example, you could take the 32 bits of the sequential ID and rearrange them with a fixed scheme (for example, bit 1 becomes bit 6, bit 2 becomes bit 15, etc.). This is a reversible, one-to-one mapping, so you can be sure that two different IDs always produce different encodings. It would obviously be easy to decode if someone takes the time to collect enough IDs and work out the scheme, but, if I understand your problem correctly, you just want to not give away information too easily.
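A sketch of that idea: shuffle the 32 bit positions once with a fixed seed, then apply the same permutation to every ID. Because a permutation is a bijection, distinct IDs always stay distinct, and the inverse permutation recovers the original row number:

```python
import random

_rng = random.Random(0xC0FFEE)       # fixed seed: the scheme must never change
PERM = list(range(32))
_rng.shuffle(PERM)                   # PERM[i] = destination position of bit i
INV = [0] * 32
for i, p in enumerate(PERM):
    INV[p] = i                       # inverse permutation for decoding

def permute_bits(n: int, perm: list) -> int:
    """Move bit i of n to position perm[i]."""
    out = 0
    for i, p in enumerate(perm):
        if n >> i & 1:
            out |= 1 << p
    return out

public_id = permute_bits(783, PERM)            # what the URL shows
assert permute_bits(public_id, INV) == 783     # what the database uses
```

The names and seed are illustrative; the only requirement is that the permutation is fixed for the lifetime of the application.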
I think that this is one of those issues that cause quasi-religious debates, and it's almost futile to talk about. I would just say use what you prefer. In 99% of systems it will not matter which type of key you use, so the benefits (stated in the other posts) of using one sort over the other will never be an issue.
We use GUIDs as primary keys for all our tables as it doubles as the RowGUID for MS SQL Server Replication. Makes it very easy when the client suddenly opens an office in another part of the world...
It also depends on what you care about for your application. For n-tier apps GUIDs/UUIDs are simpler to implement and are easier to port between different databases. To produce Integer keys some database support a sequence object natively and some require custom construction of a sequence table.
\n\n
Integer keys probably (I don't have numbers) provide an advantage for query and indexing performance as well as space usage. Direct DB querying is also much easier using numeric keys, less copy/paste as they are easier to remember.
Why not have your URI key be human readable (or unguessable, depending on your needs), and your primary index integer based, that way you get the best of both worlds. A lot of blog software does that, where the exposed id of the entry is identified by a 'slug', and the numeric id is hidden away inside of the system.
\n\n
The added benefit here is that you now have a really nice URL structure, which is good for SEO. Obviously for a transaction this is not a good thing, but for something like stackoverflow, it is important (see URL up top...). Getting uniqueness isn't that difficult. If you are really concerned, store a hash of the slug inside a table somewhere, and do a lookup before insertion.
\n\n
edit: Stackoverflow doesn't quite use the system I describe, see Guy's comment below.
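For illustration, a bare-bones slug generator of the kind blog software uses; the `existing` set is a hypothetical stand-in for the table lookup that guards against duplicate slugs:

```python
import re

def slugify(title: str, existing: set = frozenset()) -> str:
    """Lower-case the title and collapse runs of non-alphanumerics to single hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    # On collision, append a counter -- a stand-in for checking a slug table.
    candidate, n = slug, 2
    while candidate in existing:
        candidate = f"{slug}-{n}"
        n += 1
    return candidate

print(slugify("Hello, World!"))                   # hello-world
print(slugify("Hello, World!", {"hello-world"}))  # hello-world-2
```

The numeric primary key stays internal; only the slug (or slug plus id, as in Stack Overflow's actual URLs) is exposed.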
As long as you use a DB system with efficient storage, HDD is cheap these days anyway...
\n\n
I know GUID's can be a b*tch to work with some times and come with some query overhead however from a security perspective they are a savior.
\n\n
Thinking security by obscurity, they fit well when forming obscure URIs, and when building normalised DBs with table-, record- and column-level security defined, you can't go wrong with GUIDs. Try doing that with integer-based ids.
My opinion is that it is preferable to use integers and have short, comprehensible URLs.
\n
As a developer, it feels a little bit awful seeing sequential integers and knowing that some information about total record count is leaking out, but honestly - most people probably don't care, and that information has never really been critical to my businesses.
\n
Having long ugly UUID URLs seems to me like much more of a turn off to normal users.
YouTube uses 11 characters with base64 encoding, which offers 64^11 possibilities (about 7.4 × 10^19), and they are usually pretty manageable to write. I wonder if that would offer better performance than a full-on UUID. A UUID converted to base64 would be double the size, I believe.
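To put numbers on that: 11 characters over a 64-symbol alphabet gives 64^11 = 2^66 IDs, versus the 2^122 random bits of a v4 UUID (22 characters in base64). A sketch of generating such an ID with cryptographically secure randomness, so values are unguessable:

```python
import secrets

# The 64-symbol URL-safe base64 alphabet.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"

def short_id(length: int = 11) -> str:
    """Draw each character uniformly at random from the 64-symbol alphabet."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

assert 64 ** 11 == 2 ** 66   # the keyspace: ~7.4e19 possible IDs
print(short_id())            # e.g. an 11-character ID like YouTube's video IDs
```

With a keyspace that large, random collisions are negligible for most applications, though a unique constraint on the column is still the safe backstop.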
Depending on the app you may or may not care about the url. If you don't care, just use the uuid as is, it's fine.
\n
If you care, then you will need to decide on url format.
\n
Best case scenario is the use of a unique slug, if you're OK with never changing it:
\n
http://example.com/sale/super-duper-phone
\n
If your url is generated from the title and you want to change the slug on title change, there are a few options. Use the uuid as is and query by it (the slug is just decoration), e.g. http://example.com/sale/035a46e0-6550-11dd-ad8b-0800200c9a66/phone-1-title
If you don't want uuid or short id in url and want only slug, but do care about seo and user bookmarks, you will need to redirect all request from
\n
http://example.com/sale/phone-1-title\n
\n
to
\n
http://example.com/sale/phone-1-title-updated\n
\n
this will add additional complexity of managing slug history, adding fallback to history for all queries where slug is used and redirects if slugs doesn't match
I work with a student management system which uses UUID's in the form of an integer. They have a table which hold the next unique ID.
\\n\\n
Although this is probably a good idea for an architectural point of view, it makes working with on a daily basis difficult. Sometimes there is a need to do bulk inserts and having a UUID makes this very difficult, usually requiring writing a cursor instead of a simple SELECT INTO statement.
I can't say about the web side of your question. But uuids are great for n-tier applications. PK generation can be decentralized: each client generates it's own pk without risk of collision. \\nAnd the speed difference is generally small.
\\n\\n
Make sure your database supports an efficient storage datatype (16 bytes, 128 bits).\\nAt the very least you can encode the uuid string in base64 and use char(22).
\\n\\n
I've used them extensively with Firebird and do recommend.
I can answer you that in SQL server if you use a uniqueidentifier (GUID) datatype and use the NEWID() function to create values you will get horrible fragmentation because of page splits. The reason is that when using NEWID() the value generated is not sequential. SQL 2005 added the NEWSEQUANTIAL() function to remedy that
\\n\\n
One way to still use GUID and int is to have a guid and an int in a table so that the guid maps to the int. the guid is used externally but the int internally in the DB
I don't think a GUID gives you many benefits. Users hate long, incomprehensible URLs.
\\n\\n
Create a shorter ID that you can map to the URL, or enforce a unique user name convention (http://example.com/user/brianly). The guys at 37Signals would probably mock you for worrying about something like this when it comes to a web app.
\\n\\n
Incidentally you can force your database to start creating integer IDs from a base value.
For what it's worth, I've seen a long running stored procedure (9+ seconds) drop to just a few hundred milliseconds of run time simply by switching from GUID primary keys to integers. That's not to say displaying a GUID is a bad idea, but as others have pointed out, joining on them, and indexing them, by definition, is not going to be anywhere near as fast as with integers.
You could use an integer which is related to the row number but is not sequential. For example, you could take the 32 bits of the sequential ID and rearrange them with a fixed scheme (for example, bit 1 becomes bit 6, bit 2 becomes bit 15, etc..). \\nThis will be a bidirectional encryption, and you will be sure that two different IDs will always have different encryptions. \\nIt would obviously be easy to decode, if one takes the time to generate enough IDs and get the schema, but, if I understand correctly your problem, you just want to not give away information too easily.
I think that this is one of these issues that cause quasi-religious debates, and its almost futile to talk about. I would just say use what you prefer. In 99% of systems it will no matter which type of key you use, so the benefits (stated in the other posts) of using one sort over the other will never be an issue.
We use GUIDs as primary keys for all our tables as it doubles as the RowGUID for MS SQL Server Replication. Makes it very easy when the client suddenly opens an office in another part of the world...
It also depends on what you care about for your application. For n-tier apps GUIDs/UUIDs are simpler to implement and are easier to port between different databases. To produce Integer keys some database support a sequence object natively and some require custom construction of a sequence table.
\\n\\n
Integer keys probably (I don't have numbers) provide an advantage for query and indexing performance as well as space usage. Direct DB querying is also much easier using numeric keys, less copy/paste as they are easier to remember.
Why not have your URI key be human readable (or unguessable, depending on your needs), and your primary index integer based, that way you get the best of both worlds. A lot of blog software does that, where the exposed id of the entry is identified by a 'slug', and the numeric id is hidden away inside of the system.
\\n\\n
The added benefit here is that you now have a really nice URL structure, which is good for SEO. Obviously for a transaction this is not a good thing, but for something like stackoverflow, it is important (see URL up top...). Getting uniqueness isn't that difficult. If you are really concerned, store a hash of the slug inside a table somewhere, and do a lookup before insertion.
\\n\\n
edit: Stackoverflow doesn't quite use the system I describe, see Guy's comment below.
As long as you use a DB system with efficient storage, HDD is cheap these days anyway...
\\n\\n
I know GUID's can be a b*tch to work with some times and come with some query overhead however from a security perspective they are a savior.
\\n\\n
Thinking security by obscurity they fit well when forming obscure URI's and building normalised DB's with Table, Record and Column defined security you cant go wrong with GUID's, try doing that with integer based id's.
My opinion is that it is preferable to use integers and have short, comprehensible URLs.
\\n
As a developer, it feels a little bit awful seeing sequential integers and knowing that some information about total record count is leaking out, but honestly - most people probably don't care, and that information has never really been critical to my businesses.
\\n
Having long ugly UUID URLs seems to me like much more of a turn off to normal users.
YouTube uses 11 characters with base64 encoding which offers 11^64 possibilities, and they are usually pretty manageable to write. I wonder if that would offer better performance than a full on UUID. UUID converted to base 64 would be double the size I believe.
Depending on app you may care or not care about url. If you don't care, just use uuid as is, it's fine.
\\n
If you care, then you will need to decide on url format.
\\n
Best case scenario is a use of unique slug if you ok with never changing it:
\\n
http://example.com/sale/super-duper-phone\\n
\\n
If your url is generated from title and you want to change slug on title change there is a few options. Use it as is and query by uuid (slug is just decoration):
If you don't want uuid or short id in url and want only slug, but do care about seo and user bookmarks, you will need to redirect all request from
\\n
http://example.com/sale/phone-1-title\\n
\\n
to
\\n
http://example.com/sale/phone-1-title-updated\\n
\\n
this will add additional complexity of managing slug history, adding fallback to history for all queries where slug is used and redirects if slugs doesn't match
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5949","https://Stackoverflow.com","https://Stackoverflow.com/users/726/"],"string":"[\n \"https://Stackoverflow.com/questions/5949\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/726/\"\n]"},"input":{"kind":"string","value":"I've always preferred to use long integers as primary keys in databases, for simplicity and (assumed) speed. But when using a [REST](http://en.wikipedia.org/wiki/Representational_State_Transfer) or Rails-like URL scheme for object instances, I'd then end up with URLs like this:\n\n```\nhttp://example.com/user/783\n\n```\n\nAnd then the assumption is that there are also users with IDs of 782, 781, ..., 2, and 1. Assuming that the web app in question is secure enough to prevent people entering other numbers to view other users without authorization, a simple sequentially-assigned surrogate key also \"leaks\" the total number of instances (older than this one), in this case users, which might be privileged information. (For instance, I am user #726 in stackoverflow.)\n\nWould a [UUID](http://en.wikipedia.org/wiki/Uuid)/GUID be a better solution? Then I could set up URLs like this:\n\n```\nhttp://example.com/user/035a46e0-6550-11dd-ad8b-0800200c9a66\n\n```\n\nNot exactly succinct, but there's less implied information about users on display. Sure, it smacks of \"security through obscurity\" which is no substitute for proper security, but it seems at least a little more secure.\n\nIs that benefit worth the cost and complexity of implementing UUIDs for web-addressable object instances? I think that I'd still want to use integer columns as database PKs just to speed up joins.\n\nThere's also the question of in-database representation of UUIDs. I know MySQL stores them as 36-character strings. Postgres seems to have a more efficient internal representation (128 bits?) 
but I haven't tried it myself. Anyone have any experience with this?\n\n---\n\nUpdate: for those who asked about just using the user name in the URL (e.g., ), that works fine for object instances with names that are unique, but what about the zillions of web app objects that can really only be identified by number? Orders, transactions, invoices, duplicate image names, stackoverflow questions, ..."},"output":{"kind":"string","value":"I can't say about the web side of your question. But uuids are great for n-tier applications. PK generation can be decentralized: each client generates it's own pk without risk of collision. \nAnd the speed difference is generally small.\n\nMake sure your database supports an efficient storage datatype (16 bytes, 128 bits).\nAt the very least you can encode the uuid string in base64 and use char(22).\n\nI've used them extensively with Firebird and do recommend."}}},{"rowIdx":483,"cells":{"qid":{"kind":"number","value":5966,"string":"5,966"},"question":{"kind":"string","value":"
Basically, I've written an API to www.thetvdb.com in Python. The current code can be found here.
\n\n
It grabs data from the API as requested, and has to store the data somehow, and make it available by doing:
\n\n
print tvdbinstance[1][23]['episodename'] # get the name of episode 23 of season 1\n
\n\n
What is the \"best\" way to abstract this data within the Tvdb() class?
\n\n
I originally used a extended Dict() that automatically created sub-dicts (so you could do x[1][2][3][4] = \"something\" without having to do if x[1].has_key(2): x[1][2] = [] and so on)
\n\n
Then I just stored the data by doing self.data[show_id][season_number][episode_number][attribute_name] = \"something\"
\n\n
This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not (so I couldn't raise the season_not_found exception).
\n\n
Currently it's using four classes: ShowContainer, Show, Season and Episode. Each one is a very basic dict, which I can easily add extra functionality in (the search() function on Show() for example). Each has a __setitem__, __getitem_ and has_key.
\n\n
This works mostly fine, I can check in Shows if it has that season in it's self.data dict, if not, raise season_not_found. I can also check in Season() if it has that episode and so on.
\n\n
The problem now is it's presenting itself as a dict, but doesn't have all the functionality, and because I'm overriding the __getitem__ and __setitem__ functions, it's easy to accidentally recursively call __getitem__ (so I'm not sure if extending the Dict class will cause problems).
\n\n
The other slight problem is adding data into the dict is a lot more work than the old Dict method (which was self.data[seas_no][ep_no]['attribute'] = 'something'). See _setItem and _setData. It's not too bad, since it's currently only a read-only API interface (so the users of the API should only ever retrieve data, not add more), but it's hardly... Elegant.
\n\n
I think the series-of-classes system is probably the best way, but does anyone have a better idea for storing the data? And would extending the ShowContainer/etc classes with Dict cause problems?
That way you add metadata to any record and search it very easily
\n\n
season_1 = [e for e in episodes if e['season'] == 1]\nbilly_bob = [e for e in episodes if 'actors' in e and 'Billy Bob' in e['actors']]\n\nfor episode in billy_bob:\n print \"Billy bob was in Season %s Episode %s\" % (episode['season'], episode['episode'])\n
I have done something similar in the past and used an in-memory XML document as a quick and dirty hierarchical database for storage. You can store each show/season/episode as an element (nested appropriately) and attributes of these things as xml attributes on the elements. Then you can use XQuery to get info back out.
\n
NOTE: I'm not a Python guy so I don't know what your xml support is like.
\n
NOTE 2: You'll want to profile this because it'll be bigger and slower than the solution you've already got. Likely enough if you are doing some high-volume processing then XML is probably not going to be your friend.
This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not (so I couldn't raise the season_not_found exception)
\n
\n\n
There is a way to do it - called in:
\n\n
>>>x={}\n>>>x[1]={}\n>>>x[1][2]={}\n>>>x\n{1: {2: {}}}\n>>> 2 in x[1]\nTrue\n>>> 3 in x[1]\nFalse\n
Bartosz/To clarify \"This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not\"
\n\n
x['some show'][3][24] would return season 3, episode 24 of \"some show\". If there was no season 3, I want the pseudo-dict to raise tvdb_seasonnotfound, if \"some show\" doesn't exist, then raise tvdb_shownotfound
\n\n
The current system of a series of classes, each with a __getitem__ - Show checks if self.seasons.has_key(requested_season_number), the Season class checks if self.episodes.has_key(requested_episode_number) and so on.
\n\n
It works, but it there seems to be a lot of repeated code (each class is basically the same, but raises a different error)
OK, what you need is classobj from new module. That would allow you to construct exception classes dynamically (classobj takes a string as an argument for the class name).
\n\n
import new\nmyexc=new.classobj(\"ExcName\",(Exception,),{})\ni=myexc(\"This is the exc msg!\")\nraise i\n
\n\n
this gives you:
\n\n
Traceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\n__main__.ExcName: This is the exc msg!\n
\n\n
remember that you can always get the class name through:
\n\n
self.__class__.__name__\n
\n\n
So, after some string mangling and concatenation, you should be able to obtain appropriate exception class name and construct a class object using that name and then raise that exception.
\n\n
P.S. - you can also raise strings, but this is deprecated.
That way you add metadata to any record and search it very easily
\\n\\n
season_1 = [e for e in episodes if e['season'] == 1]\\nbilly_bob = [e for e in episodes if 'actors' in e and 'Billy Bob' in e['actors']]\\n\\nfor episode in billy_bob:\\n print \\\"Billy bob was in Season %s Episode %s\\\" % (episode['season'], episode['episode'])\\n
I have done something similar in the past and used an in-memory XML document as a quick and dirty hierarchical database for storage. You can store each show/season/episode as an element (nested appropriately) and attributes of these things as xml attributes on the elements. Then you can use XQuery to get info back out.
\\n
NOTE: I'm not a Python guy so I don't know what your xml support is like.
\\n
NOTE 2: You'll want to profile this because it'll be bigger and slower than the solution you've already got. Likely enough if you are doing some high-volume processing then XML is probably not going to be your friend.
This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not (so I couldn't raise the season_not_found exception)
\\n
\\n\\n
There is a way to do it - called in:
\\n\\n
>>>x={}\\n>>>x[1]={}\\n>>>x[1][2]={}\\n>>>x\\n{1: {2: {}}}\\n>>> 2 in x[1]\\nTrue\\n>>> 3 in x[1]\\nFalse\\n
Bartosz/To clarify \\\"This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not\\\"
\\n\\n
x['some show'][3][24] would return season 3, episode 24 of \\\"some show\\\". If there was no season 3, I want the pseudo-dict to raise tvdb_seasonnotfound, if \\\"some show\\\" doesn't exist, then raise tvdb_shownotfound
\\n\\n
The current system of a series of classes, each with a __getitem__ - Show checks if self.seasons.has_key(requested_season_number), the Season class checks if self.episodes.has_key(requested_episode_number) and so on.
\\n\\n
It works, but it there seems to be a lot of repeated code (each class is basically the same, but raises a different error)
OK, what you need is classobj from new module. That would allow you to construct exception classes dynamically (classobj takes a string as an argument for the class name).
\\n\\n
import new\\nmyexc=new.classobj(\\\"ExcName\\\",(Exception,),{})\\ni=myexc(\\\"This is the exc msg!\\\")\\nraise i\\n
\\n\\n
this gives you:
\\n\\n
Traceback (most recent call last):\\nFile \\\"<stdin>\\\", line 1, in <module>\\n__main__.ExcName: This is the exc msg!\\n
\\n\\n
remember that you can always get the class name through:
\\n\\n
self.__class__.__name__\\n
\\n\\n
So, after some string mangling and concatenation, you should be able to obtain appropriate exception class name and construct a class object using that name and then raise that exception.
\\n\\n
P.S. - you can also raise strings, but this is deprecated.
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5966","https://Stackoverflow.com","https://Stackoverflow.com/users/745/"],"string":"[\n \"https://Stackoverflow.com/questions/5966\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/745/\"\n]"},"input":{"kind":"string","value":"Basically, I've written an API to www.thetvdb.com in Python. The current code can be found [here](http://github.com/dbr/tvdb_api/tree/master/tvdb_api.py).\n\nIt grabs data from the API as requested, and has to store the data somehow, and make it available by doing:\n\n```\nprint tvdbinstance[1][23]['episodename'] # get the name of episode 23 of season 1\n\n```\n\nWhat is the \"best\" way to abstract this data within the `Tvdb()` class?\n\nI originally used a extended `Dict()` that automatically created sub-dicts (so you could do `x[1][2][3][4] = \"something\"` without having to do `if x[1].has_key(2): x[1][2] = []` and so on)\n\nThen I just stored the data by doing `self.data[show_id][season_number][episode_number][attribute_name] = \"something\"`\n\nThis worked okay, but there was no easy way of checking if `x[3][24]` was supposed to exist or not (so I couldn't raise the season\\_not\\_found exception).\n\nCurrently it's using four classes: `ShowContainer`, `Show`, `Season` and `Episode`. Each one is a very basic dict, which I can easily add extra functionality in (the `search()` function on `Show()` for example). Each has a `__setitem__`, `__getitem_` and `has_key`.\n\nThis works mostly fine, I can check in Shows if it has that season in it's `self.data` dict, if not, `raise season_not_found`. 
I can also check in `Season()` if it has that episode and so on.\n\nThe problem now is it's presenting itself as a dict, but doesn't have all the functionality, and because I'm overriding the `__getitem__` and `__setitem__` functions, it's easy to accidentally recursively call `__getitem__` (so I'm not sure if extending the `Dict` class will cause problems).\n\nThe other slight problem is adding data into the dict is a lot more work than the old `Dict` method (which was `self.data[seas_no][ep_no]['attribute'] = 'something'`). See `_setItem` and `_setData`. It's not too bad, since it's currently only a read-only API interface (so the users of the API should only ever retrieve data, not add more), but it's hardly... Elegant.\n\nI think the series-of-classes system is probably the best way, but does anyone have a better idea for storing the data? And would extending the `ShowContainer`/etc classes with `Dict` cause problems?"},"output":{"kind":"string","value":"OK, what you need is `classobj` from new module. That would allow you to construct exception classes dynamically (`classobj` takes a string as an argument for the class name). \n\n```\nimport new\nmyexc=new.classobj(\"ExcName\",(Exception,),{})\ni=myexc(\"This is the exc msg!\")\nraise i\n\n```\n\nthis gives you:\n\n```\nTraceback (most recent call last):\nFile \"\", line 1, in \n__main__.ExcName: This is the exc msg!\n\n```\n\nremember that you can always get the class name through:\n\n```\nself.__class__.__name__\n\n```\n\nSo, after some string mangling and concatenation, you should be able to obtain appropriate exception class name and construct a class object using that name and then raise that exception.\n\nP.S. - you can also raise strings, but this is deprecated.\n\n```\nraise(self.__class__.__name__+\"Exception\")\n\n```"}}},{"rowIdx":484,"cells":{"qid":{"kind":"number","value":5982,"string":"5,982"},"question":{"kind":"string","value":"
Does anyone know why when using BindingUtils on the selectedItem property of a ComboBox you get the following warning? Any ideas how to resolve the issue?
\n\n
The binding still works properly, but it would be nice to get rid of the warning.
\n\n
warning: multiple describeType entries for 'selectedItem' on type 'mx.controls::ComboBox':\n<accessor name=\"selectedItem\" access=\"readwrite\" type=\"Object\" declaredBy=\"mx.controls::ComboBase\">\n <metadata name=\"Bindable\">\n <arg key=\"\" value=\"valueCommit\"/>\n </metadata>\n
Here is the code. It is basically a copy of BindingUtils.bindProperty that is setup for a ComboBox so that both the combo box and the model are updated when either of the two change.
\n
public static function bindProperty2(site:Object, prop:String, host:Object, chain:Object, commitOnly:Boolean = false):ChangeWatcher\n{\n var cbx:ComboBox = null;\n if ( site is ComboBox ) { cbx = ComboBox(site); }\n if ( host is ComboBox ) { cbx = ComboBox(host); }\n var labelField:String = "listID";\n \n var w:ChangeWatcher = ChangeWatcher.watch(host, chain, null, commitOnly);\n \n if (w != null)\n {\n var func:Function;\n \n if ( site is ComboBox )\n {\n func = function(event:*):void\n {\n var dp:ICollectionView = ICollectionView(site.dataProvider);\n var selItem:Object = null;\n \n for ( var i:int=0; i<dp.length; i++ )\n {\n var obj:Object = dp[i];\n if ( obj.hasOwnProperty(labelField) )\n {\n var val:String = String(obj[labelField]);\n if ( val == w.getValue() )\n {\n selItem = obj;\n break;\n }\n }\n }\n \n site.selectedItem = selItem;\n };\n \n w.setHandler(func);\n func(null);\n }\n else\n {\n func = function(event:*):void\n {\n var value:Object = w.getValue();\n if ( value == null )\n {\n site[prop] = null;\n }\n else\n {\n site[prop] = String(w.getValue()[labelField]);\n }\n };\n w.setHandler(func);\n func(null);\n }\n }\n \n return w;\n}\n
Here is the code. It is basically a copy of BindingUtils.bindProperty that is setup for a ComboBox so that both the combo box and the model are updated when either of the two change.
\\n
public static function bindProperty2(site:Object, prop:String, host:Object, chain:Object, commitOnly:Boolean = false):ChangeWatcher\\n{\\n var cbx:ComboBox = null;\\n if ( site is ComboBox ) { cbx = ComboBox(site); }\\n if ( host is ComboBox ) { cbx = ComboBox(host); }\\n var labelField:String = "listID";\\n \\n var w:ChangeWatcher = ChangeWatcher.watch(host, chain, null, commitOnly);\\n \\n if (w != null)\\n {\\n var func:Function;\\n \\n if ( site is ComboBox )\\n {\\n func = function(event:*):void\\n {\\n var dp:ICollectionView = ICollectionView(site.dataProvider);\\n var selItem:Object = null;\\n \\n for ( var i:int=0; i<dp.length; i++ )\\n {\\n var obj:Object = dp[i];\\n if ( obj.hasOwnProperty(labelField) )\\n {\\n var val:String = String(obj[labelField]);\\n if ( val == w.getValue() )\\n {\\n selItem = obj;\\n break;\\n }\\n }\\n }\\n \\n site.selectedItem = selItem;\\n };\\n \\n w.setHandler(func);\\n func(null);\\n }\\n else\\n {\\n func = function(event:*):void\\n {\\n var value:Object = w.getValue();\\n if ( value == null )\\n {\\n site[prop] = null;\\n }\\n else\\n {\\n site[prop] = String(w.getValue()[labelField]);\\n }\\n };\\n w.setHandler(func);\\n func(null);\\n }\\n }\\n \\n return w;\\n}\\n
It is better to override the property in question and declare it final.
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/5982","https://Stackoverflow.com","https://Stackoverflow.com/users/22/"],"string":"[\n \"https://Stackoverflow.com/questions/5982\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/22/\"\n]"},"input":{"kind":"string","value":"Does anyone know why when using BindingUtils on the selectedItem property of a ComboBox you get the following warning? Any ideas how to resolve the issue?\n\nThe binding still works properly, but it would be nice to get rid of the warning.\n\n```\nwarning: multiple describeType entries for 'selectedItem' on type 'mx.controls::ComboBox':\n\n \n \n \n\n```"},"output":{"kind":"string","value":"It is better to override the property in question and declare it final."}}},{"rowIdx":485,"cells":{"qid":{"kind":"number","value":6007,"string":"6,007"},"question":{"kind":"string","value":"
How do I make Log4net only log Info level logs? Is that even possible? Can you only set a threshold?
\n\n
This is what I have, and it logs Info and above as I would expect. Is there anything i can do to make it only log info?
the accept on match default is true so u can leave it out but if u set it to false u can filter out log4net filters
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/6007","https://Stackoverflow.com","https://Stackoverflow.com/users/230/"],"string":"[\n \"https://Stackoverflow.com/questions/6007\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/230/\"\n]"},"input":{"kind":"string","value":"How do I make [Log4net](http://logging.apache.org/log4net/index.html) only log Info level logs? Is that even possible? Can you only set a threshold? \n\nThis is what I have, and it logs Info and above as I would expect. Is there anything i can do to make it only log info?\n\n```xml\n\n \n \n\n\n```"},"output":{"kind":"string","value":"Within the definition of the appender, I believe you can do something like this:\n\n```xml\n\n \n \n \n \n ...\n\n\n```"}}},{"rowIdx":486,"cells":{"qid":{"kind":"number","value":6076,"string":"6,076"},"question":{"kind":"string","value":"
I am building a server control that will search our db and return results. The server control is contains an ASP:Panel. I have set the default button on the panel equal to my button id and have set the form default button equal to my button id.
Works fine in IE & Safari I can type a search term and hit the enter key and it searches fine. If I do it in Firefox I get an alert box saying "Object reference not set to an instance of an a object.
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/6076","https://Stackoverflow.com","https://Stackoverflow.com/users/225/"],"string":"[\n \"https://Stackoverflow.com/questions/6076\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/225/\"\n]"},"input":{"kind":"string","value":"I am building a server control that will search our db and return results. The server control is contains an ASP:Panel. I have set the default button on the panel equal to my button id and have set the form default button equal to my button id.\n\nOn the Panel:\n\n```\n MyPanel.DefaultButton = SearchButton.ID\n```\n\nOn the Control:\n\n```\nMe.Page.Form.DefaultButton = SearchButton.UniqueID \n```\n\nWorks fine in IE & Safari I can type a search term and hit the enter key and it searches fine. If I do it in Firefox I get an alert box saying \"Object reference not set to an instance of an a object.\n\nAnyone run across this before?"},"output":{"kind":"string","value":"Ends up this resolved my issue:\n\n```\n SearchButton.UseSubmitBehavior = False\n\n```"}}},{"rowIdx":487,"cells":{"qid":{"kind":"number","value":6110,"string":"6,110"},"question":{"kind":"string","value":"
I've been handed a table with about 18000 rows. Each record describes the location of one customer. The issue is that when the person created the table, they did not add a field for \"Company Name\", only \"Location Name\", and one company can have many locations.
\n\n
For example, here are some records that describe the same customer:
\n\n
Location Table
\n\n
ID Location_Name \n 1 TownShop#1 \n 2 Town Shop - Loc 2 \n 3 The Town Shop \n 4 TTS - Someplace \n 5 Town Shop,the 3 \n 6 Toen Shop4 \n
\n\n
My goal is to make it look like:
\n\n
Location Table
\n\n
ID Company_ID Location_Name \n 1 1 Town Shop#1 \n 2 1 Town Shop - Loc 2 \n 3 1 The Town Shop \n 4 1 TTS - Someplace \n 5 1 Town Shop,the 3 \n 6 1 Toen Shop4 \n
\n\n
Company Table
\n\n
Company_ID Company_Name \n 1 The Town Shop \n
\n\n
There is no \"Company\" table; I will have to generate the Company Name list from the most descriptive or best Location Name that represents the multiple locations.
\n\n
Currently I am thinking I need to generate a list of Location Names that are similar, and then go through that list by hand.
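As a first pass, I'm picturing something like grouping names that collapse to the same cleaned-up key (a rough sketch; the filler-word list is just a guess, and it still misses typos like "Toen" and abbreviations like "TTS", which would need fuzzy matching):

```python
import re
from collections import defaultdict

# Filler tokens to ignore when comparing names; this list is a guess
# and would need tuning against the real data.
FILLER = {"the", "loc", "location"}

def name_key(location_name):
    """Collapse a location name to a crude comparison key:
    lowercase, keep only alphabetic runs, drop filler words."""
    tokens = re.findall(r"[a-z]+", location_name.lower())
    return " ".join(t for t in tokens if t not in FILLER)

def group_similar(location_names):
    """Bucket location names whose cleaned-up keys match exactly."""
    groups = defaultdict(list)
    for name in location_names:
        groups[name_key(name)].append(name)
    return dict(groups)

names = ["Town Shop#1", "Town Shop - Loc 2", "The Town Shop",
         "TTS - Someplace", "Town Shop,the 3", "Toen Shop4"]
groups = group_similar(names)
# The "town shop" bucket collects four of the six sample rows;
# "TTS - Someplace" and "Toen Shop4" land in their own buckets.
```

That would at least shrink the hand-review list from 18000 rows to the buckets that remain ambiguous.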
\n\n
Any suggestions on how I can approach this are appreciated.
\n\n
@Neall, Thank you for your statement, but unfortunately, each location name is distinct; there are no duplicate location names, only similar ones. So in the results from your statement, \"repcount\" is 1 in each row.
\n\n
@yukondude, Your step 4 is the heart of my question.
I've had to do this before. The only real way to do it is to manually match up the various locations. Use your database's console interface and grouping select statements. First, add your \"Company Name\" field. Then:
\n\n
SELECT count(*) AS repcount, \"Location Name\" FROM mytable\n WHERE \"Company Name\" IS NULL\n GROUP BY \"Location Name\"\n ORDER BY repcount DESC\n LIMIT 5;\n
\n\n
Figure out what company the location at the top of the list belongs to and then update your company name field with an UPDATE ... WHERE \"Location Name\" = 'The Location' statement.
\n\n
P.S. - You should really break your company names and location names out into separate tables and refer to them by their primary keys.
\n\n
Update: - Wow - no duplicates? How many records do you have?
Please update the question: do you have a list of CompanyNames available to you? I ask because you may be able to use the Levenshtein algorithm to find a relationship between your list of CompanyNames and LocationNames.
\n\n\n\n
Update
\n\n
\n
There is not a list of Company Names, I will have to generate the company name from the most descriptive or best Location Name that represents the multiple locations.
\n
\n\n
Okay... try this:
\n\n\n
Build a list of candidate CompanyNames by finding LocationNames made up of mostly or all alphabetic characters. You can use regular expressions for this. Store this list in a separate table.
\n
Sort that list alphabetically and (manually) determine which entries should be CompanyNames.
\n
Compare each CompanyName to each LocationName and come up with a match score (use Levenshtein or some other string matching algo). Store the result in a separate table.
\n
Set a threshold score such that any MatchScore < Threshold will not be considered a match for a given CompanyName.
\n
Manually vet through the LocationNames by CompanyName | LocationName | MatchScore, and figure out which ones actually match. Ordering by MatchScore should make the process less painful.
\n\n\n
The whole purpose of the above actions is to automate parts and limit the scope of your problem. It's far from perfect, but will hopefully save you the trouble of going through 18K records by hand.
I was going to recommend some complicated token matching algorithm but it's really tricky to get right and if you're data does not have a lot of correlation (typos, etc) then it's not going to give very good results.
\n\n
I would recommend you submit a job to the Amazon Mechanical Turk and let a human sort it out.
Ideally, you'd probably want a separate table named Company and then a company_id column in this \"Location\" table that is a foreign key to the Company table's primary key, likely called id. That would avoid a fair bit of text duplication in this table (over 18,000 rows, an integer foreign key would save quite a bit of space over a varchar column).
\n\n
But you're still faced with a method for loading that Company table and then properly associating it with the rows in Location. There's no general solution, but you could do something along these lines:
\n\n\n
Create the Company table, with an id column that auto-increments (depends on your RDBMS).
\n
Find all of the unique company names and insert them into Company.
\n
Add a column, company_id, to Location that accepts NULLs (for now) and that is a foreign key of the Company.id column.
\n
For each row in Location, determine the corresponding company, and UPDATE that row's company_id column with that company's id. This is likely the most challenging step. If your data is like what you show in the example, you'll likely have to take many runs at this with various string matching approaches.
\n
Once all rows in Location have a company_id value, then you can ALTER the Company table to add a NOT NULL constraint to the company_id column (assuming that every location must have a company, which seems reasonable).
\n\n\n
If you can make a copy of your Location table, you can gradually build up a series of SQL statements to populate the company_id foreign key. If you make a mistake, you can just start over and rerun the script up to the point of failure.
Yes, that step 4 from my previous post is a doozy.
\n\n
No matter what, you're probably going to have to do some of this by hand, but you may be able to automate the bulk of it. For the example locations you gave, a query like the following would set the appropriate company_id value:
\n\n
UPDATE Location\nSET Company_ID = 1\nWHERE (LOWER(Location_Name) LIKE '%to_n shop%'\nOR LOWER(Location_Name) LIKE '%tts%')\nAND Company_ID IS NULL;\n
\n\n
I believe that would match your examples (I added the IS NULL part to not overwrite previously set Company_ID values), but of course in 18,000 rows you're going to have to be pretty inventive to handle the various combinations.
\n\n
Something else that might help would be to use the names in Company to generate queries like the one above. You could do something like the following (in MySQL):
\n\n
SELECT CONCAT('UPDATE Location SET Company_ID = ',\n Company_ID, ' WHERE LOWER(Location_Name) LIKE ',\n LOWER(REPLACE(Company_Name), ' ', '%'), ' AND Company_ID IS NULL;')\nFROM Company;\n
\n\n
Then just run the statements that it produces. That could do a lot of the grunge work for you.
I've had to do this before. The only real way to do it is to manually match up the various locations. Use your database's console interface and grouping select statements. First, add your \\\"Company Name\\\" field. Then:
\\n\\n
SELECT count(*) AS repcount, \\\"Location Name\\\" FROM mytable\\n WHERE \\\"Company Name\\\" IS NULL\\n GROUP BY \\\"Location Name\\\"\\n ORDER BY repcount DESC\\n LIMIT 5;\\n
\\n\\n
Figure out what company the location at the top of the list belongs to and then update your company name field with an UPDATE ... WHERE \\\"Location Name\\\" = \\\"The Location\\\" statement.
\\n\\n
P.S. - You should really break your company names and location names out into separate tables and refer to them by their primary keys.
\\n\\n
Update: - Wow - no duplicates? How many records do you have?
Please update the question, do you have a list of CompanyNames available to you? I ask because you maybe able to use Levenshtein algo to find a relationship between your list of CompanyNames and LocationNames.
\\n\\n\\n\\n
Update
\\n\\n
\\n
There is not a list of Company Names, I will have to generate the company name from the most descriptive or best Location Name that represents the multiple locations.
\\n
\\n\\n
Okay... try this:
\\n\\n\\n
Build a list of candidate CompanyNames by finding LocationNames made up of mostly or all alphabetic characters. You can use regular expressions for this. Store this list in a separate table.
\\n
Sort that list alphabetically and (manually) determine which entries should be CompanyNames.
\\n
Compare each CompanyName to each LocationName and come up with a match score (use Levenshtein or some other string matching algo). Store the result in a separate table.
\\n
Set a threshold score such that any MatchScore < Threshold will not be considered a match for a given CompanyName.
\\n
Manually vet through the LocationNames by CompanyName | LocationName | MatchScore, and figure out which ones actually match. Ordering by MatchScore should make the process less painful.
\\n\\n\\n
The whole purpose of the above actions is to automate parts and limit the scope of your problem. It's far from perfect, but will hopefully save you the trouble of going through 18K records by hand.
I was going to recommend some complicated token matching algorithm but it's really tricky to get right and if you're data does not have a lot of correlation (typos, etc) then it's not going to give very good results.
\\n\\n
I would recommend you submit a job to the Amazon Mechanical Turk and let a human sort it out.
Ideally, you'd probably want a separate table named Company and then a company_id column in this \\\"Location\\\" table that is a foreign key to the Company table's primary key, likely called id. That would avoid a fair bit of text duplication in this table (over 18,000 rows, an integer foreign key would save quite a bit of space over a varchar column).
\\n\\n
But you're still faced with a method for loading that Company table and then properly associating it with the rows in Location. There's no general solution, but you could do something along these lines:
\\n\\n\\n
Create the Company table, with an id column that auto-increments (depends on your RDBMS).
\\n
Find all of the unique company names and insert them into Company.
\\n
Add a column, company_id, to Location that accepts NULLs (for now) and that is a foreign key of the Company.id column.
\\n
For each row in Location, determine the corresponding company, and UPDATE that row's company_id column with that company's id. This is likely the most challenging step. If your data is like what you show in the example, you'll likely have to take many runs at this with various string matching approaches.
\\n
Once all rows in Location have a company_id value, then you can ALTER the Company table to add a NOT NULL constraint to the company_id column (assuming that every location must have a company, which seems reasonable).
\\n\\n\\n
If you can make a copy of your Location table, you can gradually build up a series of SQL statements to populate the company_id foreign key. If you make a mistake, you can just start over and rerun the script up to the point of failure.
Yes, that step 4 from my previous post is a doozy.
\\n\\n
No matter what, you're probably going to have to do some of this by hand, but you may be able to automate the bulk of it. For the example locations you gave, a query like the following would set the appropriate company_id value:
\\n\\n
UPDATE Location\\nSET Company_ID = 1\\nWHERE (LOWER(Location_Name) LIKE '%to_n shop%'\\nOR LOWER(Location_Name) LIKE '%tts%')\\nAND Company_ID IS NULL;\\n
\\n\\n
I believe that would match your examples (I added the IS NULL part to not overwrite previously set Company_ID values), but of course in 18,000 rows you're going to have to be pretty inventive to handle the various combinations.
\\n\\n
Something else that might help would be to use the names in Company to generate queries like the one above. You could do something like the following (in MySQL):
\\n\\n
SELECT CONCAT('UPDATE Location SET Company_ID = ',\\n Company_ID, ' WHERE LOWER(Location_Name) LIKE ',\\n LOWER(REPLACE(Company_Name), ' ', '%'), ' AND Company_ID IS NULL;')\\nFROM Company;\\n
\\n\\n
Then just run the statements that it produces. That could do a lot of the grunge work for you.
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/6110","https://Stackoverflow.com","https://Stackoverflow.com/users/754/"],"string":"[\n \"https://Stackoverflow.com/questions/6110\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/754/\"\n]"},"input":{"kind":"string","value":"I've been handed a table with about 18000 rows. Each record describes the location of one customer. The issue is, that when the person created the table, they did not add a field for \"Company Name\", only \"Location Name,\" and one company can have many locations.\n\nFor example, here are some records that describe the same customer:\n\n**Location Table**\n\n```\n ID Location_Name \n 1 TownShop#1 \n 2 Town Shop - Loc 2 \n 3 The Town Shop \n 4 TTS - Someplace \n 5 Town Shop,the 3 \n 6 Toen Shop4 \n\n```\n\nMy goal is to make it look like:\n\n**Location Table**\n\n```\n ID Company_ID Location_Name \n 1 1 Town Shop#1 \n 2 1 Town Shop - Loc 2 \n 3 1 The Town Shop \n 4 1 TTS - Someplace \n 5 1 Town Shop,the 3 \n 6 1 Toen Shop4 \n\n```\n\n**Company Table**\n\n```\n Company_ID Company_Name \n 1 The Town Shop \n\n```\n\nThere is no \"Company\" table, I will have to generate the Company Name list from the most descriptive or best Location Name that represents the multiple locations.\n\nCurrently I am thinking I need to generate a list of Location Names that are similar, and then and go through that list by hand.\n\nAny suggestions on how I can approach this is appreciated.\n\n**@Neall, Thank you for your statement, but unfortunately, each location name is distinct, there are no duplicate location names, only similar. So in the results from your statement \"repcount\" is 1 in each row.**\n\n**@yukondude, Your step 4 is the heart of my question.**"},"output":{"kind":"string","value":"Please update the question, do you have a list of CompanyNames available to you? 
It's something that's bugged me in every language I've used, I have an if statement but the conditional part has so many checks that I have to split it over multiple lines, use a nested if statement or just accept that it's ugly and move on with my life.
\n\n
Are there any other methods that you've found that might be of use to me and anybody else that's hit the same problem?
First, I'd remove all the == true parts, that would make it 50% shorter ;)
\n\n
When I have big condition I search for the reasons. Sometimes I see I should use polymorphism, sometimes I need to add some state object. Basically, it implies a refactoring is needed (a code smell).
\n\n
Sometimes I use De-Morgan's laws to simplify boolean expressions a bit.
Make sure you give your variables name that actualy indicate intention rather than function. This will greatly help the developer maintaining your code... it could be YOU!
Also, it's very hard to refactor abstract code examples. If you showed a specific example it would be easier to identify a better pattern to fit the problem.
\n\n
It's no better, but what I've done in the past:\n(The following method prevents short-circuiting boolean testing, all tests are run even if the first is false. Not a recommended pattern unless you know you need to always execute all the code before returning -- Thanks to ptomato for spotting my mistake!)
\n\n
\n
boolean ok = cond1; \n ok &= cond2; \n ok &= cond3; \n ok &= cond4; \n ok &= cond5; \n ok &= cond6;
\n
\n\n
Which is the same as: (not the same, see above note!)
Check out Implementation Patterns by Kent Beck. There is a particular pattern I am thinking of that may help in this situation... it is called \"Guards\". Rather than having tons of conditions, you can break them out into a guard, which makes it clear which are the adverse conditions in a method.
\n\n
So for example, if you have a method that does something, but there are certain conditions where it shouldn't do something, rather than:
\n\n
public void doSomething() {\n if (condition1 && condition2 && condition3 && condition4) {\n // do something\n }\n}\n
\n\n
You could change it to:
\n\n
public void doSomething() {\n if (!condition1) {\n return;\n }\n\n if (!condition2) {\n return;\n }\n\n if (!condition3) {\n return;\n }\n\n if (!condition4) {\n return;\n }\n\n // do something\n}\n
\n\n
It's a bit more verbose, but a lot more readable, especially when you start having weird nesting, the guard can help (combined with extracting methods).
There are two issues to address here: readability and understandability
\n\n
The \"readability\" solution is a style issue and as such is open to interpretation. My preference is this:
\n\n
if (var1 == true && // Explanation of the check\n var2 == true && // Explanation of the check\n var3 == true && // Explanation of the check\n var4 == true && // Explanation of the check\n var5 == true && // Explanation of the check\n var6 == true) // Explanation of the check\n { }\n
\n\n
or this:
\n\n
if (var1 && // Explanation of the check\n var2 && // Explanation of the check\n var3 && // Explanation of the check\n var4 && // Explanation of the check\n var5 && // Explanation of the check\n var6) // Explanation of the check\n { }\n
\n\n
That said, this kind of complex check can be quite difficult to mentally parse while scanning the code (especially if you are not the original author). Consider creating a helper method to abstract some of the complexity away:
\n\n
/// <Summary>\n/// Tests whether all the conditions are appropriately met\n/// </Summary>\nprivate bool AreAllConditionsMet (\n bool var1,\n bool var2,\n bool var3,\n bool var4,\n bool var5,\n bool var6)\n{\n return (\n var1 && // Explanation of the check\n var2 && // Explanation of the check\n var3 && // Explanation of the check\n var4 && // Explanation of the check\n var5 && // Explanation of the check\n var6); // Explanation of the check\n}\n\nprivate void SomeMethod()\n{\n // Do some stuff (including declare the required variables)\n if (AreAllConditionsMet (var1, var2, var3, var4, var5, var6))\n {\n // Do something\n }\n}\n
\n\n
Now when visually scanning the \"SomeMethod\" method, the actual complexity of the test logic is hidden but the semantic meaning is preserved for humans to understand at a high-level. If the developer really needs to understand the details, the AreAllConditionsMet method can be examined.
\n\n
This is formally known as the \"Decompose Conditional\" refactoring pattern I think. Tools like Resharper or Refactor Pro! can making doing this kind of refactoring easy!
\n\n
In all cases, the key to having readable and understandable code is to use realistic variable names. While I understand this is a contrived example, \"var1\", \"var2\", etc are not acceptable variable names. They should have a name which reflects the underlying nature of the data they represent.
If you happen to be programming in Python, it's a cinch with the built-in all() function applied over the list of your variables (I'll just use Boolean literals here):
\n\n
>>> L = [True, True, True, False, True]\n>>> all(L) # True, only if all elements of L are True.\nFalse\n>>> any(L) # True, if any elements of L are True.\nTrue\n
\n\n
Is there any corresponding function in your language (C#? Java?). If so, that's likely the cleanest approach.
if (var1 == true) {\n if (var2 == true) {\n if (var3 == true) {\n ...\n }\n }\n}\n
\n\n
Then you can also respond to cases where something isn't true. For example, if you're validating input, you could give the user a tip for how to properly format it, or whatever.
You are correct that when using the single '&' operator that both sides of the expression evaluate. However, when using the '&&' operator (at least in C#) then the first expression to return false is the last expression evaluated. This makes putting the evaulation before the FOR statement just as good as any other way of doing it.
Actually, these two things are not the same in most languages. The second expression will typically stop being evaluated as soon as one of the conditions is false, which can be a big performance improvement if evaluating the conditions is expensive.
\n\n
For readability, I personally prefer Mike Stone's proposal above. It's easy to verbosely comment and preserves all of the computational advantages of being able to early out. You can also do the same technique inline in a function if it'd confuse the organization of your code to move the conditional evaluation far away from your other function. It's a bit cheesy, but you can always do something like:
\n\n
do {\n if (!cond1)\n break;\n if (!cond2)\n break;\n if (!cond3)\n break;\n ...\n DoSomething();\n} while (false);\n
\n\n
the while (false) is kind of cheesy. I wish languages had a scoping operator called \"once\" or something that you could break out of easily.
Try looking at Functors and Predicates. The Apache Commons project has a great set of objects to allow you to encapsulate conditional logic into objects. Example of their use is available on O'reilly here. Excerpt of code example:
Now the details of all those isHonorRoll predicates and the closures used to evaluate them:
\n\n
import org.apache.commons.collections.Closure;\nimport org.apache.commons.collections.Predicate;\n\n// Anonymous Predicate that decides if a student \n// has made the honor roll.\nPredicate isHonorRoll = new Predicate() {\n public boolean evaluate(Object object) {\n Student s = (Student) object;\n\n return( ( s.getGrade().equals( \"A\" ) ) ||\n ( s.getGrade().equals( \"B\" ) && \n s.getAttendance() == PERFECT ) );\n }\n};\n\n// Anonymous Predicate that decides if a student\n// has a problem.\nPredicate isProblem = new Predicate() {\n public boolean evaluate(Object object) {\n Student s = (Student) object;\n\n return ( ( s.getGrade().equals( \"D\" ) || \n s.getGrade().equals( \"F\" ) ) ||\n s.getStatus() == SUSPENDED );\n }\n};\n\n// Anonymous Closure that adds a student to the \n// honor roll\nClosure addToHonorRoll = new Closure() {\n public void execute(Object object) {\n Student s = (Student) object;\n\n // Add an award to student record\n s.addAward( \"honor roll\", 2005 );\n Database.saveStudent( s );\n }\n};\n\n// Anonymous Closure flags a student for attention\nClosure flagForAttention = new Closure() {\n public void execute(Object object) {\n Student s = (Student) object;\n\n // Flag student for special attention\n s.addNote( \"talk to student\", 2005 );\n s.addNote( \"meeting with parents\", 2005 );\n Database.saveStudent( s );\n }\n};\n
Steve Mcconell's advice, from Code Complete:\nUse a multi-dimensional table. Each variable serves as an index to the table,\nand the if statement turns into a table lookup. For example if (size == 3 && weight > 70)\ntranslates into the table entry decision[size][weight_group]
If I was doing it in Perl, This is how I might run the checks.
\n\n
{\n last unless $var1;\n last unless $var2;\n last unless $var3;\n last unless $var4;\n last unless $var5;\n last unless $var6;\n\n ... # Place Code Here\n}\n
\n\n
If you plan on using this over a subroutine replace every instance of last with return;
if (condition_A) {\n if (condition_B) {\n if (condition_C) {\n if (condition_D) {\n if (condition_E) {\n if (condition_F) {\n ...\n }\n }\n }\n }\n }\n }\n
if (condition_A && condition_B) {\n do_this_same_thing();\n }\n if (condition_C && (condition_D) {\n do_this_same_thing();\n }\n if (condition_E && condition_F) {\n do_this_same_thing();\n }\n
\n\n
Most of the static analysis tools for examining code will complain if multiple conditional expressions do not use explicit parenthesis dictating expression analysis, instead of relying on operator precedence rules and fewer parenthesis.
\n\n
Vertical alignment at the same indent level of open/close braces {}, open close parenthesis (), conditional expressions with parenthesis and operators on the left is an very useful practice, which greatly ENHANCES readability and clarity of the code as opposed to jamming everything that can possibly be jammed onto a single line, sans vertical alignment, spaces or parenthesis
\n\n
Operator precedence rules are tricky, e.g. && has higher precedence than ||, but | has precedence than &&
\n\n
So, ...
\n\n
if (expr_A & expr_B || expr_C | expr_D & expr_E || expr_E && expr_F & expr_G || expr_H {\n }\n
\n\n
is a really easy multiple conditional expression for mere humans to read and evaluate improperly.
\n\n
if ( ( (expr_A)\n & (expr_B)\n )\n || ( (expr_C)\n | ( (expr_D)\n & (expr_E)\n )\n )\n || ( (expr_E)\n && ( (expr_F)\n & (expr_G)\n )\n )\n || (expr_H)\n )\n {\n }\n
\n\n
There is nothing wrong with horizontal space (linefeeds), vertical alignment, or explicit parenthesis guiding expression evaluation, all of which ENHANCES readability and clarity
First, I'd remove all the == true parts, that would make it 50% shorter ;)
\\n\\n
When I have big condition I search for the reasons. Sometimes I see I should use polymorphism, sometimes I need to add some state object. Basically, it implies a refactoring is needed (a code smell).
\\n\\n
Sometimes I use De-Morgan's laws to simplify boolean expressions a bit.
Make sure you give your variables name that actualy indicate intention rather than function. This will greatly help the developer maintaining your code... it could be YOU!
Also, it's very hard to refactor abstract code examples. If you showed a specific example it would be easier to identify a better pattern to fit the problem.
\\n\\n
It's no better, but what I've done in the past:\\n(The following method prevents short-circuiting boolean testing, all tests are run even if the first is false. Not a recommended pattern unless you know you need to always execute all the code before returning -- Thanks to ptomato for spotting my mistake!)
\\n\\n
\\n
boolean ok = cond1; \\n ok &= cond2; \\n ok &= cond3; \\n ok &= cond4; \\n ok &= cond5; \\n ok &= cond6;
\\n
\\n\\n
Which is the same as: (not the same, see above note!)
Check out Implementation Patterns by Kent Beck. There is a particular pattern I am thinking of that may help in this situation... it is called \\\"Guards\\\". Rather than having tons of conditions, you can break them out into a guard, which makes it clear which are the adverse conditions in a method.
\\n\\n
So for example, if you have a method that does something, but there are certain conditions where it shouldn't do something, rather than:
\\n\\n
public void doSomething() {\\n if (condition1 && condition2 && condition3 && condition4) {\\n // do something\\n }\\n}\\n
\\n\\n
You could change it to:
\\n\\n
public void doSomething() {\\n if (!condition1) {\\n return;\\n }\\n\\n if (!condition2) {\\n return;\\n }\\n\\n if (!condition3) {\\n return;\\n }\\n\\n if (!condition4) {\\n return;\\n }\\n\\n // do something\\n}\\n
\\n\\n
It's a bit more verbose, but a lot more readable, especially when you start having weird nesting, the guard can help (combined with extracting methods).
There are two issues to address here: readability and understandability
\\n\\n
The \\\"readability\\\" solution is a style issue and as such is open to interpretation. My preference is this:
\\n\\n
if (var1 == true && // Explanation of the check\\n var2 == true && // Explanation of the check\\n var3 == true && // Explanation of the check\\n var4 == true && // Explanation of the check\\n var5 == true && // Explanation of the check\\n var6 == true) // Explanation of the check\\n { }\\n
\\n\\n
or this:
\\n\\n
if (var1 && // Explanation of the check\\n var2 && // Explanation of the check\\n var3 && // Explanation of the check\\n var4 && // Explanation of the check\\n var5 && // Explanation of the check\\n var6) // Explanation of the check\\n { }\\n
\\n\\n
That said, this kind of complex check can be quite difficult to mentally parse while scanning the code (especially if you are not the original author). Consider creating a helper method to abstract some of the complexity away:
\\n\\n
/// <Summary>\\n/// Tests whether all the conditions are appropriately met\\n/// </Summary>\\nprivate bool AreAllConditionsMet (\\n bool var1,\\n bool var2,\\n bool var3,\\n bool var4,\\n bool var5,\\n bool var6)\\n{\\n return (\\n var1 && // Explanation of the check\\n var2 && // Explanation of the check\\n var3 && // Explanation of the check\\n var4 && // Explanation of the check\\n var5 && // Explanation of the check\\n var6); // Explanation of the check\\n}\\n\\nprivate void SomeMethod()\\n{\\n // Do some stuff (including declare the required variables)\\n if (AreAllConditionsMet (var1, var2, var3, var4, var5, var6))\\n {\\n // Do something\\n }\\n}\\n
\\n\\n
Now when visually scanning the \\\"SomeMethod\\\" method, the actual complexity of the test logic is hidden but the semantic meaning is preserved for humans to understand at a high-level. If the developer really needs to understand the details, the AreAllConditionsMet method can be examined.
\\n\\n
This is formally known as the \\\"Decompose Conditional\\\" refactoring pattern I think. Tools like Resharper or Refactor Pro! can making doing this kind of refactoring easy!
\\n\\n
In all cases, the key to having readable and understandable code is to use realistic variable names. While I understand this is a contrived example, \\\"var1\\\", \\\"var2\\\", etc are not acceptable variable names. They should have a name which reflects the underlying nature of the data they represent.
If you happen to be programming in Python, it's a cinch with the built-in all() function applied over the list of your variables (I'll just use Boolean literals here):
\\n\\n
>>> L = [True, True, True, False, True]\\n>>> all(L) # True, only if all elements of L are True.\\nFalse\\n>>> any(L) # True, if any elements of L are True.\\nTrue\\n
\\n\\n
Is there any corresponding function in your language (C#? Java?). If so, that's likely the cleanest approach.
if (var1 == true) {\\n if (var2 == true) {\\n if (var3 == true) {\\n ...\\n }\\n }\\n}\\n
\\n\\n
Then you can also respond to cases where something isn't true. For example, if you're validating input, you could give the user a tip for how to properly format it, or whatever.
You are correct that when using the single '&' operator that both sides of the expression evaluate. However, when using the '&&' operator (at least in C#) then the first expression to return false is the last expression evaluated. This makes putting the evaulation before the FOR statement just as good as any other way of doing it.
Actually, these two things are not the same in most languages. The second expression will typically stop being evaluated as soon as one of the conditions is false, which can be a big performance improvement if evaluating the conditions is expensive.
\\n\\n
For readability, I personally prefer Mike Stone's proposal above. It's easy to verbosely comment and preserves all of the computational advantages of being able to early out. You can also do the same technique inline in a function if it'd confuse the organization of your code to move the conditional evaluation far away from your other function. It's a bit cheesy, but you can always do something like:
\\n\\n
do {\\n if (!cond1)\\n break;\\n if (!cond2)\\n break;\\n if (!cond3)\\n break;\\n ...\\n DoSomething();\\n} while (false);\\n
\\n\\n
the while (false) is kind of cheesy. I wish languages had a scoping operator called \\\"once\\\" or something that you could break out of easily.
Try looking at Functors and Predicates. The Apache Commons project has a great set of objects to allow you to encapsulate conditional logic into objects. Example of their use is available on O'reilly here. Excerpt of code example:
Now the details of all those isHonorRoll predicates and the closures used to evaluate them:
\\n\\n
import org.apache.commons.collections.Closure;\\nimport org.apache.commons.collections.Predicate;\\n\\n// Anonymous Predicate that decides if a student \\n// has made the honor roll.\\nPredicate isHonorRoll = new Predicate() {\\n public boolean evaluate(Object object) {\\n Student s = (Student) object;\\n\\n return( ( s.getGrade().equals( \\\"A\\\" ) ) ||\\n ( s.getGrade().equals( \\\"B\\\" ) && \\n s.getAttendance() == PERFECT ) );\\n }\\n};\\n\\n// Anonymous Predicate that decides if a student\\n// has a problem.\\nPredicate isProblem = new Predicate() {\\n public boolean evaluate(Object object) {\\n Student s = (Student) object;\\n\\n return ( ( s.getGrade().equals( \\\"D\\\" ) || \\n s.getGrade().equals( \\\"F\\\" ) ) ||\\n s.getStatus() == SUSPENDED );\\n }\\n};\\n\\n// Anonymous Closure that adds a student to the \\n// honor roll\\nClosure addToHonorRoll = new Closure() {\\n public void execute(Object object) {\\n Student s = (Student) object;\\n\\n // Add an award to student record\\n s.addAward( \\\"honor roll\\\", 2005 );\\n Database.saveStudent( s );\\n }\\n};\\n\\n// Anonymous Closure flags a student for attention\\nClosure flagForAttention = new Closure() {\\n public void execute(Object object) {\\n Student s = (Student) object;\\n\\n // Flag student for special attention\\n s.addNote( \\\"talk to student\\\", 2005 );\\n s.addNote( \\\"meeting with parents\\\", 2005 );\\n Database.saveStudent( s );\\n }\\n};\\n
Steve Mcconell's advice, from Code Complete:\\nUse a multi-dimensional table. Each variable serves as an index to the table,\\nand the if statement turns into a table lookup. For example if (size == 3 && weight > 70)\\ntranslates into the table entry decision[size][weight_group]
If I was doing it in Perl, This is how I might run the checks.
\\n\\n
{\\n last unless $var1;\\n last unless $var2;\\n last unless $var3;\\n last unless $var4;\\n last unless $var5;\\n last unless $var6;\\n\\n ... # Place Code Here\\n}\\n
\\n\\n
If you plan on using this over a subroutine replace every instance of last with return;
if (condition_A) {\\n if (condition_B) {\\n if (condition_C) {\\n if (condition_D) {\\n if (condition_E) {\\n if (condition_F) {\\n ...\\n }\\n }\\n }\\n }\\n }\\n }\\n
if (condition_A && condition_B) {\\n do_this_same_thing();\\n }\\n if (condition_C && (condition_D) {\\n do_this_same_thing();\\n }\\n if (condition_E && condition_F) {\\n do_this_same_thing();\\n }\\n
\\n\\n
Most of the static analysis tools for examining code will complain if multiple conditional expressions do not use explicit parenthesis dictating expression analysis, instead of relying on operator precedence rules and fewer parenthesis.
\\n\\n
Vertical alignment at the same indent level of open/close braces {}, open close parenthesis (), conditional expressions with parenthesis and operators on the left is an very useful practice, which greatly ENHANCES readability and clarity of the code as opposed to jamming everything that can possibly be jammed onto a single line, sans vertical alignment, spaces or parenthesis
\\n\\n
Operator precedence rules are tricky, e.g. && has higher precedence than ||, but | has precedence than &&
\\n\\n
So, ...
\\n\\n
if (expr_A & expr_B || expr_C | expr_D & expr_E || expr_E && expr_F & expr_G || expr_H {\\n }\\n
\\n\\n
is a really easy multiple conditional expression for mere humans to read and evaluate improperly.
\\n\\n
if ( ( (expr_A)\\n & (expr_B)\\n )\\n || ( (expr_C)\\n | ( (expr_D)\\n & (expr_E)\\n )\\n )\\n || ( (expr_E)\\n && ( (expr_F)\\n & (expr_G)\\n )\\n )\\n || (expr_H)\\n )\\n {\\n }\\n
\\n\\n
There is nothing wrong with horizontal space (linefeeds), vertical alignment, or explicit parenthesis guiding expression evaluation, all of which ENHANCES readability and clarity
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/6126","https://Stackoverflow.com","https://Stackoverflow.com/users/1384652/"],"string":"[\n \"https://Stackoverflow.com/questions/6126\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/1384652/\"\n]"},"input":{"kind":"string","value":"It's something that's bugged me in every language I've used, I have an if statement but the conditional part has so many checks that I have to split it over multiple lines, use a nested if statement or just accept that it's ugly and move on with my life.\n\nAre there any other methods that you've found that might be of use to me and anybody else that's hit the same problem?\n\nExample, all on one line:\n\n```\nif (var1 = true && var2 = true && var2 = true && var3 = true && var4 = true && var5 = true && var6 = true)\n{\n\n```\n\nExample, multi-line:\n\n```\nif (var1 = true && var2 = true && var2 = true\n && var3 = true && var4 = true && var5 = true\n && var6 = true)\n{\n\n```\n\nExample-nested:\n\n```\nif (var1 = true && var2 = true && var2 = true && var3 = true)\n{\n if (var4 = true && var5 = true && var6 = true)\n {\n\n```"},"output":{"kind":"string","value":"Separate the condition in several booleans and then use a master boolean as the condition.\n\n```\nbool isOpaque = object.Alpha == 1.0f;\nbool isDrawable = object.CanDraw && object.Layer == currentLayer;\nbool isHidden = hideList.Find(object);\n\nbool isVisible = isOpaque && isDrawable && ! isHidden;\n\nif(isVisible)\n{\n // ...\n}\n\n```\n\nBetter yet:\n\n```\npublic bool IsVisible {\n get\n {\n bool isOpaque = object.Alpha == 1.0f;\n bool isDrawable = object.CanDraw && object.Layer == currentLayer;\n bool isHidden = hideList.Find(object);\n\n return isOpaque && isDrawable && ! 
isHidden;\n }\n}\n\nvoid Draw()\n{\n if(IsVisible)\n {\n // ...\n }\n}\n\n```\n\nMake sure you give your variables name that actualy indicate intention rather than function. This will greatly help the developer maintaining your code... it could be YOU!"}}},{"rowIdx":489,"cells":{"qid":{"kind":"number","value":6134,"string":"6,134"},"question":{"kind":"string","value":"
I have a problem with some zombie-like processes on a certain server that need to be killed every now and then. How can I best identify the ones that have run for longer than an hour or so?
Using ps is the right way. I've already done something similar before but don't have the source handy.\nGenerally - ps has an option to tell it which fields to show and by which to sort. You can sort the output by running time, grep the process you want and then kill it.
will give you the answer, but it drops down to day-precision which might not be as useful.
\n\n
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\nroot 1 0.0 0.0 7200 308 ? Ss Jun22 0:02 init [5]\nroot 2 0.0 0.0 0 0 ? S Jun22 0:02 [migration/0]\nroot 3 0.0 0.0 0 0 ? SN Jun22 0:18 [ksoftirqd/0]\nroot 4 0.0 0.0 0 0 ? S Jun22 0:00 [watchdog/0]\n
\n\n
If you're on linux or another system with the /proc filesystem, In this example, you can only see that process 1 has been running since June 22, but no indication of the time it was started.
\n\n
stat /proc/<pid>\n
\n\n
will give you a more precise answer. For example, here's an exact timestamp for process 1, which ps shows only as Jun22:
(Where user-id is a specific user's ID with long-running processes.)
\n\n
The second regular expression matches the a time that has an optional days figure, followed by an hour, minute, and second component, and so is at least one hour in length.
do a ps -aef. this will show you the time at which the process started. Then using the date command find the current time. Calculate the difference between the two to find the age of the process.
\n"},{"answer_id":3474710,"author":"Peter V. Mørch","author_id":345716,"author_profile":"https://Stackoverflow.com/users/345716","pm_score":3,"selected":false,"text":"
I did something similar to the accepted answer but slightly differently since I want to match based on process name and based on the bad process running for more than 100 seconds
The -i flag will prompt you with yes/no for each process match.
\n"},{"answer_id":11042931,"author":"Rafael S. Calsaverini","author_id":114388,"author_profile":"https://Stackoverflow.com/users/114388","pm_score":2,"selected":false,"text":"
You can use bc to join the two commands in mob's answer and get how many seconds ellapsed since the process started:
where patterns is a string or extended regular expression, it will print out all processes matching this pattern and the seconds since they started. :)
Jodie C and others have pointed out that killall -i can be used, which is fine if you want to use the process name to kill. But if you want to kill by the same parameters as pgrep -f, you need to use something like the following, using pure bash and the /proc filesystem.
\n\n
#!/bin/sh \n\nmax_age=120 # (seconds) \nnaughty=\"$(pgrep -f offlineimap)\" \nif [[ -n \"$naughty\" ]]; then # naughty is running \n age_in_seconds=$(echo \"$(date +%s) - $(stat -c %X /proc/$naughty)\" | bc) \n if [[ \"$age_in_seconds\" -ge \"$max_age\" ]]; then # naughty is too old! \n kill -s 9 \"$naughty\" \n fi \nfi \n
\n\n
This lets you find and kill processes older than max_age seconds using the full process name; i.e., the process named /usr/bin/python2 offlineimap can be killed by reference to \"offlineimap\", whereas the killall solutions presented here will only work on the string \"python2\".
This reverses the output fields: elapsed time first, full command including arguments second. This is preferred because the full command may contain spaces.
Using ps is the right way. I've already done something similar before but don't have the source handy.\\nGenerally - ps has an option to tell it which fields to show and by which to sort. You can sort the output by running time, grep the process you want and then kill it.
will give you the answer, but it drops down to day-precision which might not be as useful.
\\n\\n
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\\nroot 1 0.0 0.0 7200 308 ? Ss Jun22 0:02 init [5]\\nroot 2 0.0 0.0 0 0 ? S Jun22 0:02 [migration/0]\\nroot 3 0.0 0.0 0 0 ? SN Jun22 0:18 [ksoftirqd/0]\\nroot 4 0.0 0.0 0 0 ? S Jun22 0:00 [watchdog/0]\\n
\\n\\n
If you're on linux or another system with the /proc filesystem, In this example, you can only see that process 1 has been running since June 22, but no indication of the time it was started.
\\n\\n
stat /proc/<pid>\\n
\\n\\n
will give you a more precise answer. For example, here's an exact timestamp for process 1, which ps shows only as Jun22:
(Where user-id is a specific user's ID with long-running processes.)
\\n\\n
The second regular expression matches the a time that has an optional days figure, followed by an hour, minute, and second component, and so is at least one hour in length.
do a ps -aef. this will show you the time at which the process started. Then using the date command find the current time. Calculate the difference between the two to find the age of the process.
I did something similar to the accepted answer but slightly differently since I want to match based on process name and based on the bad process running for more than 100 seconds
where patterns is a string or extended regular expression, it will print out all processes matching this pattern and the seconds since they started. :)
Jodie C and others have pointed out that killall -i can be used, which is fine if you want to use the process name to kill. But if you want to kill by the same parameters as pgrep -f, you need to use something like the following, using pure bash and the /proc filesystem.
\\n\\n
#!/bin/sh \\n\\nmax_age=120 # (seconds) \\nnaughty=\\\"$(pgrep -f offlineimap)\\\" \\nif [[ -n \\\"$naughty\\\" ]]; then # naughty is running \\n age_in_seconds=$(echo \\\"$(date +%s) - $(stat -c %X /proc/$naughty)\\\" | bc) \\n if [[ \\\"$age_in_seconds\\\" -ge \\\"$max_age\\\" ]]; then # naughty is too old! \\n kill -s 9 \\\"$naughty\\\" \\n fi \\nfi \\n
\\n\\n
This lets you find and kill processes older than max_age seconds using the full process name; i.e., the process named /usr/bin/python2 offlineimap can be killed by reference to \\\"offlineimap\\\", whereas the killall solutions presented here will only work on the string \\\"python2\\\".
This reverses the output fields: elapsed time first, full command including arguments second. This is preferred because the full command may contain spaces.
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/6134","https://Stackoverflow.com","https://Stackoverflow.com/users/726/"],"string":"[\n \"https://Stackoverflow.com/questions/6134\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/726/\"\n]"},"input":{"kind":"string","value":"I have a problem with some zombie-like processes on a certain server that need to be killed every now and then. How can I best identify the ones that have run for longer than an hour or so?"},"output":{"kind":"string","value":"If they just need to be killed:\n\n```\nif [[ \"$(uname)\" = \"Linux\" ]];then killall --older-than 1h someprocessname;fi\n\n```\n\nIf you want to see what it's matching\n\n```\nif [[ \"$(uname)\" = \"Linux\" ]];then killall -i --older-than 1h someprocessname;fi\n\n```\n\nThe `-i` flag will prompt you with yes/no for each process match."}}},{"rowIdx":490,"cells":{"qid":{"kind":"number","value":6155,"string":"6,155"},"question":{"kind":"string","value":"
What kinds of hook scripts are people using for Subversion? Just general ideas but code would be great too!
The most common one I think is to allow people to change revision comments after comitting.
\n\n
You need to enable the 'pre-revprop-change' hook script to allow that. The example provided, if enabled allows editing only the comment property and only be the original comitter. Great for correcting typos.
In my work place we've set up a post-commit hook that generates RSS feeds that are displayed in various dash boards and are used for code reviewers to know when it is time to review and for us to see that new employees are committing enough.
I'm using post-commit hooks (I think it's this one) to post a message to a forum on Basecamp for each commit. Two advantages:
\n\n\n
As the lead developer, I get a roll-up of commits every morning (via the RSS feed from that basecamp forum) and can see what my team has been up to pretty quickly.
\n
Our Trac/SVN install is behind our firewall, so this gives my higher-ups in other locations a window into what we're doing. They might not understand it, but to a manager a lot of activity looks like, well, a lot of activity ;)
\n\n\n
I guess the end result of this is similar to what @Aviv is doing.
\n\n
I'm looking into solutions for building the latest commit on a separate server for continuous integration, but I'm going to have to change the way we make changes to our database schema before that will work.
A hook to notify the bug/issue management system of changes to repository. Ie. the commit message has issue:546 or similar tag in it that is parsed and fed to the bug management system.
\n"},{"answer_id":27003,"author":"Sir Rippov the Maple","author_id":2822,"author_profile":"https://Stackoverflow.com/users/2822","pm_score":0,"selected":false,"text":"
We check the following with our hook scripts:
\n\n
\n
That a commit log message has been supplied
\n
That a reviewer has been specified for the commit
\n
That no automatically generated code or banned file types land up in the repository
\n
Send an email out when a branch / tag is created
\n
\n\n
We still want to implement the following:
\n\n
\n
Send an email when a user acquires a lock on a file
\n
Send an email when your lock has been stolen
\n
Send an email to everyone when a revision property has been changed
We use FogBugz for bug tracking, it provides subversion commit scripts that allow you to include a case number in your check in comments and then associates the bug with the check in that fixed it. It does require a WebSVN instance to be set up so that you have a web based viewer for your repository.
We use a commit hook script to trigger our release robot. Writing new release information to a file named changes.txt in our different products will trigger the creation of a tag and the relevant artifacts.
I am using the pre-revprop-change hook that allows me to actually go back and edit comments and such information after the commit has been performed. This is very useful if there is missing/erroneous information in the commit comments.
\n\n
Here I post a pre-revprop-change.bat batch file for Windows NT or later. You\ncan certainly enhance it with more modifications. You can also derive a\npost-revprop-change.cmd from it to back up the old snv:log somewhere or just to append it to the new log.
\n\n
The only tricky part was to be able to actually parse the stdin from\nthe batch file. This is done here with the FIND.EXE command.
\n\n
The other thing is that I have had reports from other users of issues with the use of the /b with the exit command. You may just need to remove that /b in your specific application if error cases do not behave well.
\n\n
@ECHO OFF\n\nset repos=%1\nset rev=%2\nset user=%3\nset propname=%4\nset action=%5\n\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\n:: Only allow changes to svn:log. The author, date and other revision\n:: properties cannot be changed\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\nif /I not '%propname%'=='svn:log' goto ERROR_PROPNAME\n\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\n:: Only allow modifications to svn:log (no addition/overwrite or deletion)\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\nif /I not '%action%'=='M' goto ERROR_ACTION\n\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\n:: Make sure that the new svn:log message contains some text.\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\nset bIsEmpty=true\nfor /f \"tokens=*\" %%g in ('find /V \"\"') do (\n set bIsEmpty=false\n)\nif '%bIsEmpty%'=='true' goto ERROR_EMPTY\n\ngoto :eof\n\n\n\n:ERROR_EMPTY\necho Empty svn:log properties are not allowed. >&2\ngoto ERROR_EXIT\n\n:ERROR_PROPNAME\necho Only changes to svn:log revision properties are allowed. >&2\ngoto ERROR_EXIT\n\n:ERROR_ACTION\necho Only modifications to svn:log revision properties are allowed. >&2\ngoto ERROR_EXIT\n\n:ERROR_EXIT\nexit /b 1 \n
Windows pre-commit hook to check that log contains something.
\n\n
@ECHO OFF\nsetlocal\n\n:::::::::::::::::::::::::::::::::::::::::::::::::::::::::\n:: Get subversion arguments\nset repos=%~1\nset txn=%2\n\n:::::::::::::::::::::::::::::::::::::::::::::::::::::::::\n:: Set some variables\nset svnlookparam=\"%repos%\" -t %txn%\n\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\n:: Make sure that the new svn:log message contains some text.\nset bIsEmpty=true\nfor /f \"tokens=* usebackq\" %%g in (`svnlook log %svnlookparam%`) do (\n set bIsEmpty=false\n)\nif '%bIsEmpty%'=='true' goto ERROR_EMPTY\n\necho Allowed. >&2\n\ngoto :END\n\n\n:ERROR_EMPTY\necho Empty log messages are not allowed. >&2\ngoto ERROR_EXIT\n\n:ERROR_EXIT\n:: You may require to remove the /b below if your hook is called directly by subversion\nexit /b 1\n\n:END\nendlocal\n
post-commit hook to send email notification that something changed in the repository to a list of emails. You need sendmail.exe in the same folder than your hook file, along with sendmail.ini.
\n\n
You also need a file post-commit.tos.txt next to your post-commit.cmd to list the mail recipients. The file should contain:
I forgot to enter a comment while committing. Didn't have time to figure out why my pre-revprop-change hook wasn't working. So the following svnadmin command worked for me to enter a commit message: \n svnadmin setlog <filesystem path to my repository> --bypass-hooks -r 117 junk,\nwhere \"junk\" is the file containing the text which I wanted to be the comment. svn setlog help has more usage info...
The most common one I think is to allow people to change revision comments after comitting.
\\n\\n
You need to enable the 'pre-revprop-change' hook script to allow that. The example provided, if enabled allows editing only the comment property and only be the original comitter. Great for correcting typos.
In my work place we've set up a post-commit hook that generates RSS feeds that are displayed in various dash boards and are used for code reviewers to know when it is time to review and for us to see that new employees are committing enough.
I'm using post-commit hooks (I think it's this one) to post a message to a forum on Basecamp for each commit. Two advantages:
\\n\\n\\n
As the lead developer, I get a roll-up of commits every morning (via the RSS feed from that basecamp forum) and can see what my team has been up to pretty quickly.
\\n
Our Trac/SVN install is behind our firewall, so this gives my higher-ups in other locations a window into what we're doing. They might not understand it, but to a manager a lot of activity looks like, well, a lot of activity ;)
\\n\\n\\n
I guess the end result of this is similar to what @Aviv is doing.
\\n\\n
I'm looking into solutions for building the latest commit on a separate server for continuous integration, but I'm going to have to change the way we make changes to our database schema before that will work.
A hook to notify the bug/issue management system of changes to repository. Ie. the commit message has issue:546 or similar tag in it that is parsed and fed to the bug management system.
We use FogBugz for bug tracking, it provides subversion commit scripts that allow you to include a case number in your check in comments and then associates the bug with the check in that fixed it. It does require a WebSVN instance to be set up so that you have a web based viewer for your repository.
We use a commit hook script to trigger our release robot. Writing new release information to a file named changes.txt in our different products will trigger the creation of a tag and the relevant artifacts.
I am using the pre-revprop-change hook that allows me to actually go back and edit comments and such information after the commit has been performed. This is very useful if there is missing/erroneous information in the commit comments.
\\n\\n
Here I post a pre-revprop-change.bat batch file for Windows NT or later. You\\ncan certainly enhance it with more modifications. You can also derive a\\npost-revprop-change.cmd from it to back up the old snv:log somewhere or just to append it to the new log.
\\n\\n
The only tricky part was to be able to actually parse the stdin from\\nthe batch file. This is done here with the FIND.EXE command.
\\n\\n
The other thing is that I have had reports from other users of issues with the use of the /b with the exit command. You may just need to remove that /b in your specific application if error cases do not behave well.
\\n\\n
@ECHO OFF\\n\\nset repos=%1\\nset rev=%2\\nset user=%3\\nset propname=%4\\nset action=%5\\n\\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\\n:: Only allow changes to svn:log. The author, date and other revision\\n:: properties cannot be changed\\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\\nif /I not '%propname%'=='svn:log' goto ERROR_PROPNAME\\n\\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\\n:: Only allow modifications to svn:log (no addition/overwrite or deletion)\\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\\nif /I not '%action%'=='M' goto ERROR_ACTION\\n\\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\\n:: Make sure that the new svn:log message contains some text.\\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\\nset bIsEmpty=true\\nfor /f \\\"tokens=*\\\" %%g in ('find /V \\\"\\\"') do (\\n set bIsEmpty=false\\n)\\nif '%bIsEmpty%'=='true' goto ERROR_EMPTY\\n\\ngoto :eof\\n\\n\\n\\n:ERROR_EMPTY\\necho Empty svn:log properties are not allowed. >&2\\ngoto ERROR_EXIT\\n\\n:ERROR_PROPNAME\\necho Only changes to svn:log revision properties are allowed. >&2\\ngoto ERROR_EXIT\\n\\n:ERROR_ACTION\\necho Only modifications to svn:log revision properties are allowed. >&2\\ngoto ERROR_EXIT\\n\\n:ERROR_EXIT\\nexit /b 1 \\n
Windows pre-commit hook to check that log contains something.
\\n\\n
@ECHO OFF\\nsetlocal\\n\\n:::::::::::::::::::::::::::::::::::::::::::::::::::::::::\\n:: Get subversion arguments\\nset repos=%~1\\nset txn=%2\\n\\n:::::::::::::::::::::::::::::::::::::::::::::::::::::::::\\n:: Set some variables\\nset svnlookparam=\\\"%repos%\\\" -t %txn%\\n\\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\\n:: Make sure that the new svn:log message contains some text.\\nset bIsEmpty=true\\nfor /f \\\"tokens=* usebackq\\\" %%g in (`svnlook log %svnlookparam%`) do (\\n set bIsEmpty=false\\n)\\nif '%bIsEmpty%'=='true' goto ERROR_EMPTY\\n\\necho Allowed. >&2\\n\\ngoto :END\\n\\n\\n:ERROR_EMPTY\\necho Empty log messages are not allowed. >&2\\ngoto ERROR_EXIT\\n\\n:ERROR_EXIT\\n:: You may require to remove the /b below if your hook is called directly by subversion\\nexit /b 1\\n\\n:END\\nendlocal\\n
post-commit hook to send email notification that something changed in the repository to a list of emails. You need sendmail.exe in the same folder than your hook file, along with sendmail.ini.
\\n\\n
You also need a file post-commit.tos.txt next to your post-commit.cmd to list the mail recipients. The file should contain:
I forgot to enter a comment while committing. Didn't have time to figure out why my pre-revprop-change hook wasn't working. So the following svnadmin command worked for me to enter a commit message: \\n svnadmin setlog <filesystem path to my repository> --bypass-hooks -r 117 junk,\\nwhere \\\"junk\\\" is the file containing the text which I wanted to be the comment. svn setlog help has more usage info...
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/6155","https://Stackoverflow.com","https://Stackoverflow.com/users/204/"],"string":"[\n \"https://Stackoverflow.com/questions/6155\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/204/\"\n]"},"input":{"kind":"string","value":"What kinds of hook scripts are people using for Subversion? Just general ideas but code would be great too!"},"output":{"kind":"string","value":"I am using the `pre-revprop-change` hook that allows me to actually go back and edit comments and such information after the commit has been performed. This is very useful if there is missing/erroneous information in the commit comments.\n\nHere I post a `pre-revprop-change.bat` batch file for Windows NT or later. You\ncan certainly enhance it with more modifications. You can also derive a\n`post-revprop-change.cmd` from it to back up the old `snv:log` somewhere or just to append it to the new log.\n\nThe only tricky part was to be able to actually parse the stdin from\nthe batch file. This is done here with the `FIND.EXE` command.\n\nThe other thing is that I have had reports from other users of issues with the use of the `/b` with the `exit` command. You may just need to remove that `/b` in your specific application if error cases do not behave well.\n\n```\n@ECHO OFF\n\nset repos=%1\nset rev=%2\nset user=%3\nset propname=%4\nset action=%5\n\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\n:: Only allow changes to svn:log. 
The author, date and other revision\n:: properties cannot be changed\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\nif /I not '%propname%'=='svn:log' goto ERROR_PROPNAME\n\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\n:: Only allow modifications to svn:log (no addition/overwrite or deletion)\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\nif /I not '%action%'=='M' goto ERROR_ACTION\n\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\n:: Make sure that the new svn:log message contains some text.\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\nset bIsEmpty=true\nfor /f \"tokens=*\" %%g in ('find /V \"\"') do (\n set bIsEmpty=false\n)\nif '%bIsEmpty%'=='true' goto ERROR_EMPTY\n\ngoto :eof\n\n:ERROR_EMPTY\necho Empty svn:log properties are not allowed. >&2\ngoto ERROR_EXIT\n\n:ERROR_PROPNAME\necho Only changes to svn:log revision properties are allowed. >&2\ngoto ERROR_EXIT\n\n:ERROR_ACTION\necho Only modifications to svn:log revision properties are allowed. >&2\ngoto ERROR_EXIT\n\n:ERROR_EXIT\nexit /b 1 \n\n```"}}},{"rowIdx":491,"cells":{"qid":{"kind":"number","value":6173,"string":"6,173"},"question":{"kind":"string","value":"
I'm looking for a .NET regular expression extract all the URLs from a webpage but haven't found one to be comprehensive enough to cover all the different ways you can specify a link.
\n\n
And a side question:
\n\n
Is there one regex to rule them all? Or am I better off using a series of less complicated regular expressions and just using mutliple passes against the raw HTML? (Speed vs. Maintainability)
Look at the URI specification. That could help you a lot. And as far as performance goes, you can pretty much extract all the HTTP links in a modest web page. When I say modest I definitely do not mean one page all encompassing HTML manuals like that of ELisp manual. Also performance is a touchy topic. My advice would be to measure your performance and then decide if you are going to extract all the links using one single regex or with multiple simpler regex expressions.
I don't have time to try and think of a regex that probably won't work, but I wanted to comment that you should most definitely break up your regex, at least if it gets to this level of ugliness:
All links, including relative ones, that are called by href or src.
\n\n
#Matches things in single or double quotes, but not the quotes themselves\n(?<=([\"']))((?<=href=['\"])|(?<=src=['\"])).*?(?=\\1)\n\n#Maches thing in either double or single quotes, including the quotes.\n([\"'])((?<=href=\")|(?<=src=\")).*?\\1\n
\n\n
The second one will only get you links that use double quotes, however.
HtmlDocument doc = new HtmlDocument();\ndoc.Load(\"file.htm\");\nforeach(HtmlNode link in doc.DocumentElement.SelectNodes(\"//a@href\")\n{\nResponse.Write(link[\"href\"].Value);\n}\ndoc.Save(\"file.htm\");\n
The final character class makes sure that if an URL is part of some text, punctuation such as a comma or full stop after the URL is not interpreted as part of the URL.
Look at the URI specification. That could help you a lot. And as far as performance goes, you can pretty much extract all the HTTP links in a modest web page. When I say modest I definitely do not mean one page all encompassing HTML manuals like that of ELisp manual. Also performance is a touchy topic. My advice would be to measure your performance and then decide if you are going to extract all the links using one single regex or with multiple simpler regex expressions.
I don't have time to try and think of a regex that probably won't work, but I wanted to comment that you should most definitely break up your regex, at least if it gets to this level of ugliness:
All links, including relative ones, that are called by href or src.
\\n\\n
#Matches things in single or double quotes, but not the quotes themselves\\n(?<=([\\\"']))((?<=href=['\\\"])|(?<=src=['\\\"])).*?(?=\\\\1)\\n\\n#Maches thing in either double or single quotes, including the quotes.\\n([\\\"'])((?<=href=\\\")|(?<=src=\\\")).*?\\\\1\\n
\\n\\n
The second one will only get you links that use double quotes, however.
HtmlDocument doc = new HtmlDocument();\\ndoc.Load(\\\"file.htm\\\");\\nforeach(HtmlNode link in doc.DocumentElement.SelectNodes(\\\"//a@href\\\")\\n{\\nResponse.Write(link[\\\"href\\\"].Value);\\n}\\ndoc.Save(\\\"file.htm\\\");\\n
The final character class makes sure that if an URL is part of some text, punctuation such as a comma or full stop after the URL is not interpreted as part of the URL.
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/6173","https://Stackoverflow.com","https://Stackoverflow.com/users/322/"],"string":"[\n \"https://Stackoverflow.com/questions/6173\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/322/\"\n]"},"input":{"kind":"string","value":"I'm looking for a .NET regular expression extract all the URLs from a webpage but haven't found one to be comprehensive enough to cover all the different ways you can specify a link. \n\nAnd a side question:\n\nIs there **one regex to rule them all**? Or am I better off using a series of less complicated regular expressions and just using mutliple passes against the raw HTML? (Speed vs. Maintainability)"},"output":{"kind":"string","value":"```\n((mailto\\:|(news|(ht|f)tp(s?))\\://){1}\\S+)\n\n```\n\nI took this from [regexlib.com](http://regexlib.com/Search.aspx?k=URL)\n\n[editor's note: the {1} has no real function in this regex; [see this post](https://stackoverflow.com/questions/13470/question-about-specific-regular-expression)]"}}},{"rowIdx":492,"cells":{"qid":{"kind":"number","value":6184,"string":"6,184"},"question":{"kind":"string","value":"
When you subscribe to an event on an object from within a form, you are essentially handing over control of your callback method to the event source. You have no idea whether that event source will choose to trigger the event on a different thread.
\n\n
The problem is that when the callback is invoked, you cannot assume that you can update controls on your form, because those controls can throw an exception if the event callback was called on a thread different from the thread the form was created on.
You can't make UI control calls from a different thread than the one they were created on (the form's thread).
\n
Delegate invocations (ie, event hooks) are triggered on the same thread as the object that is firing the event.
\n\n\n
So, if you have a separate \"engine\" thread doing some work and have some UI watching for state changes which can be reflected in the UI (such as a progress bar or whatever), you have a problem. The engine fire's an object changed event which has been hooked by the Form. But the callback delegate that the Form registered with the engine gets called on the engine's thread… not on the Form's thread. And so you can't update any controls from that callback. Doh!
\n\n
BeginInvoke comes to the rescue. Just use this simple coding model in all your callback methods and you can be sure that things are going to be okay:
\n\n
private delegate void EventArgsDelegate(object sender, EventArgs ea);\n\nvoid SomethingHappened(object sender, EventArgs ea)\n{\n //\n // Make sure this callback is on the correct thread\n //\n if (this.InvokeRequired)\n {\n this.Invoke(new EventArgsDelegate(SomethingHappened), new object[] { sender, ea });\n return;\n }\n\n //\n // Do something with the event such as update a control\n //\n textBox1.Text = \"Something happened\";\n}\n
\n\n
It's quite simple really.
\n\n\n
Use InvokeRequired to find out if this callback happened on the correct thread.
\n
If not, then reinvoke the callback on the correct thread with the same parameters. You can reinvoke a method by using the Invoke (blocking) or BeginInvoke (non-blocking) methods.
\n
The next time the function is called, InvokeRequired returns false because we are now on the correct thread and everybody is happy.
\n\n\n
This is a very compact way of addressing this problem and making your Forms safe from multi-threaded event callbacks.
To simplify Simon's code a bit, you could use the built in generic Action delegate. It saves peppering your code with a bunch of delegate types you don't really need. Also, in .NET 3.5 they added a params parameter to the Invoke method so you don't have to define a temporary array.
I'm a bit late to this topic, but you might want to take a look at the Event-Based Asynchronous Pattern. When implemented properly, it guarantees that events are always raised from the UI thread.
\n\n
Here's a brief example that only allows one concurrent invocation; supporting multiple invocations/events requires a little bit more plumbing.
\n\n
using System;\nusing System.ComponentModel;\nusing System.Threading;\nusing System.Windows.Forms;\n\nnamespace WindowsFormsApplication1\n{\n public class MainForm : Form\n {\n private TypeWithAsync _type;\n\n [STAThread()]\n public static void Main()\n {\n Application.EnableVisualStyles();\n Application.Run(new MainForm());\n }\n\n public MainForm()\n {\n _type = new TypeWithAsync();\n _type.DoSomethingCompleted += DoSomethingCompleted;\n\n var panel = new FlowLayoutPanel() { Dock = DockStyle.Fill };\n\n var btn = new Button() { Text = \"Synchronous\" };\n btn.Click += SyncClick;\n panel.Controls.Add(btn);\n\n btn = new Button { Text = \"Asynchronous\" };\n btn.Click += AsyncClick;\n panel.Controls.Add(btn);\n\n Controls.Add(panel);\n }\n\n private void SyncClick(object sender, EventArgs e)\n {\n int value = _type.DoSomething();\n MessageBox.Show(string.Format(\"DoSomething() returned {0}.\", value));\n }\n\n private void AsyncClick(object sender, EventArgs e)\n {\n _type.DoSomethingAsync();\n }\n\n private void DoSomethingCompleted(object sender, DoSomethingCompletedEventArgs e)\n {\n MessageBox.Show(string.Format(\"DoSomethingAsync() returned {0}.\", e.Value));\n }\n }\n\n class TypeWithAsync\n {\n private AsyncOperation _operation;\n\n // synchronous version of method\n public int DoSomething()\n {\n Thread.Sleep(5000);\n return 27;\n }\n\n // async version of method\n public void DoSomethingAsync()\n {\n if (_operation != null)\n {\n throw new InvalidOperationException(\"An async operation is already running.\");\n }\n\n _operation = AsyncOperationManager.CreateOperation(null);\n ThreadPool.QueueUserWorkItem(DoSomethingAsyncCore);\n }\n\n // wrapper used by async method to call sync version of method, matches WaitCallback so it\n // can be queued by the thread pool\n private void DoSomethingAsyncCore(object state)\n {\n int returnValue = DoSomething();\n var e = new DoSomethingCompletedEventArgs(returnValue);\n 
_operation.PostOperationCompleted(RaiseDoSomethingCompleted, e);\n }\n\n // wrapper used so async method can raise the event; matches SendOrPostCallback\n private void RaiseDoSomethingCompleted(object args)\n {\n OnDoSomethingCompleted((DoSomethingCompletedEventArgs)args);\n }\n\n private void OnDoSomethingCompleted(DoSomethingCompletedEventArgs e)\n {\n var handler = DoSomethingCompleted;\n\n if (handler != null) { handler(this, e); }\n }\n\n public EventHandler<DoSomethingCompletedEventArgs> DoSomethingCompleted;\n }\n\n public class DoSomethingCompletedEventArgs : EventArgs\n {\n private int _value;\n\n public DoSomethingCompletedEventArgs(int value)\n : base()\n {\n _value = value;\n }\n\n public int Value\n {\n get { return _value; }\n }\n }\n}\n
You can't make UI control calls from a different thread than the one they were created on (the form's thread).
\\n
Delegate invocations (ie, event hooks) are triggered on the same thread as the object that is firing the event.
\\n\\n\\n
So, if you have a separate \\\"engine\\\" thread doing some work and have some UI watching for state changes which can be reflected in the UI (such as a progress bar or whatever), you have a problem. The engine fire's an object changed event which has been hooked by the Form. But the callback delegate that the Form registered with the engine gets called on the engine's thread… not on the Form's thread. And so you can't update any controls from that callback. Doh!
\\n\\n
BeginInvoke comes to the rescue. Just use this simple coding model in all your callback methods and you can be sure that things are going to be okay:
\\n\\n
private delegate void EventArgsDelegate(object sender, EventArgs ea);\\n\\nvoid SomethingHappened(object sender, EventArgs ea)\\n{\\n //\\n // Make sure this callback is on the correct thread\\n //\\n if (this.InvokeRequired)\\n {\\n this.Invoke(new EventArgsDelegate(SomethingHappened), new object[] { sender, ea });\\n return;\\n }\\n\\n //\\n // Do something with the event such as update a control\\n //\\n textBox1.Text = \\\"Something happened\\\";\\n}\\n
\\n\\n
It's quite simple really.
\\n\\n\\n
Use InvokeRequired to find out if this callback happened on the correct thread.
\\n
If not, then reinvoke the callback on the correct thread with the same parameters. You can reinvoke a method by using the Invoke (blocking) or BeginInvoke (non-blocking) methods.
\\n
The next time the function is called, InvokeRequired returns false because we are now on the correct thread and everybody is happy.
\\n\\n\\n
This is a very compact way of addressing this problem and making your Forms safe from multi-threaded event callbacks.
To simplify Simon's code a bit, you could use the built in generic Action delegate. It saves peppering your code with a bunch of delegate types you don't really need. Also, in .NET 3.5 they added a params parameter to the Invoke method so you don't have to define a temporary array.
I'm a bit late to this topic, but you might want to take a look at the Event-Based Asynchronous Pattern. When implemented properly, it guarantees that events are always raised from the UI thread.
\\n\\n
Here's a brief example that only allows one concurrent invocation; supporting multiple invocations/events requires a little bit more plumbing.
\\n\\n
using System;\\nusing System.ComponentModel;\\nusing System.Threading;\\nusing System.Windows.Forms;\\n\\nnamespace WindowsFormsApplication1\\n{\\n public class MainForm : Form\\n {\\n private TypeWithAsync _type;\\n\\n [STAThread()]\\n public static void Main()\\n {\\n Application.EnableVisualStyles();\\n Application.Run(new MainForm());\\n }\\n\\n public MainForm()\\n {\\n _type = new TypeWithAsync();\\n _type.DoSomethingCompleted += DoSomethingCompleted;\\n\\n var panel = new FlowLayoutPanel() { Dock = DockStyle.Fill };\\n\\n var btn = new Button() { Text = \\\"Synchronous\\\" };\\n btn.Click += SyncClick;\\n panel.Controls.Add(btn);\\n\\n btn = new Button { Text = \\\"Asynchronous\\\" };\\n btn.Click += AsyncClick;\\n panel.Controls.Add(btn);\\n\\n Controls.Add(panel);\\n }\\n\\n private void SyncClick(object sender, EventArgs e)\\n {\\n int value = _type.DoSomething();\\n MessageBox.Show(string.Format(\\\"DoSomething() returned {0}.\\\", value));\\n }\\n\\n private void AsyncClick(object sender, EventArgs e)\\n {\\n _type.DoSomethingAsync();\\n }\\n\\n private void DoSomethingCompleted(object sender, DoSomethingCompletedEventArgs e)\\n {\\n MessageBox.Show(string.Format(\\\"DoSomethingAsync() returned {0}.\\\", e.Value));\\n }\\n }\\n\\n class TypeWithAsync\\n {\\n private AsyncOperation _operation;\\n\\n // synchronous version of method\\n public int DoSomething()\\n {\\n Thread.Sleep(5000);\\n return 27;\\n }\\n\\n // async version of method\\n public void DoSomethingAsync()\\n {\\n if (_operation != null)\\n {\\n throw new InvalidOperationException(\\\"An async operation is already running.\\\");\\n }\\n\\n _operation = AsyncOperationManager.CreateOperation(null);\\n ThreadPool.QueueUserWorkItem(DoSomethingAsyncCore);\\n }\\n\\n // wrapper used by async method to call sync version of method, matches WaitCallback so it\\n // can be queued by the thread pool\\n private void DoSomethingAsyncCore(object state)\\n {\\n int returnValue = DoSomething();\\n var e = 
new DoSomethingCompletedEventArgs(returnValue);\\n _operation.PostOperationCompleted(RaiseDoSomethingCompleted, e);\\n }\\n\\n // wrapper used so async method can raise the event; matches SendOrPostCallback\\n private void RaiseDoSomethingCompleted(object args)\\n {\\n OnDoSomethingCompleted((DoSomethingCompletedEventArgs)args);\\n }\\n\\n private void OnDoSomethingCompleted(DoSomethingCompletedEventArgs e)\\n {\\n var handler = DoSomethingCompleted;\\n\\n if (handler != null) { handler(this, e); }\\n }\\n\\n public EventHandler<DoSomethingCompletedEventArgs> DoSomethingCompleted;\\n }\\n\\n public class DoSomethingCompletedEventArgs : EventArgs\\n {\\n private int _value;\\n\\n public DoSomethingCompletedEventArgs(int value)\\n : base()\\n {\\n _value = value;\\n }\\n\\n public int Value\\n {\\n get { return _value; }\\n }\\n }\\n}\\n
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/08"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/6184","https://Stackoverflow.com","https://Stackoverflow.com/users/756/"],"string":"[\n \"https://Stackoverflow.com/questions/6184\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/756/\"\n]"},"input":{"kind":"string","value":"When you subscribe to an event on an object from within a form, you are essentially handing over control of your callback method to the event source. You have no idea whether that event source will choose to trigger the event on a different thread.\n\nThe problem is that when the callback is invoked, you cannot assume that you can make update controls on your form because sometimes those controls will throw an exception if the event callback was called on a thread different than the thread the form was run on."},"output":{"kind":"string","value":"To simplify Simon's code a bit, you could use the built in generic Action delegate. It saves peppering your code with a bunch of delegate types you don't really need. Also, in .NET 3.5 they added a params parameter to the Invoke method so you don't have to define a temporary array.\n\n```\nvoid SomethingHappened(object sender, EventArgs ea)\n{\n if (InvokeRequired)\n {\n Invoke(new Action