\n\n```"}}},{"rowIdx":382,"cells":{"qid":{"kind":"number","value":3400,"string":"3,400"},"question":{"kind":"string","value":"
Note: I am using SQL's Full-text search capabilities, CONTAINS clauses and all - the * is the wildcard in full-text, % is for LIKE clauses only.
\n\n
I've read in several places now that \"leading wildcard\" searches (e.g. using \"*overflow\" to match \"stackoverflow\") is not supported in MS SQL. I'm considering using a CLR function to add regex matching, but I'm curious to see what other solutions people might have.
\n\n
More Info: You can add the asterisk only at the end of the word or phrase. - along with my empirical experience: when matching "myvalue", "my*" works, but "*value" returns no match, when doing a query as simple as:
\n\n
SELECT * FROM TABLENAME WHERE CONTAINS(TextColumn, '\"*searchterm\"');\n\n\n
Thus, my need for a workaround. I'm only using search on my site on an actual search page - so it needs to work basically the same way that Google works (in the eyes of a Joe Sixpack-type user). Not nearly as complicated, but this sort of match really shouldn't fail.
\n"},"answers":{"kind":"list like","value":[{"answer_id":3405,"author":"Kev","author_id":419,"author_profile":"https://Stackoverflow.com/users/419","pm_score":2,"selected":false,"text":"
Note: this was the answer I submitted for the original version #1 of the question before the CONTAINS keyword was introduced in revision #2. It's still factually accurate.
\n
The wildcard character in SQL Server is the % sign and it works just fine, leading, trailing or otherwise.
\n
That said, if you're going to be doing any kind of serious full text searching then I'd consider utilising the Full Text Index capabilities. Using % and _ wild cards will cause your database to take a serious performance hit.
\n"},{"answer_id":3413,"author":"GateKiller","author_id":383,"author_profile":"https://Stackoverflow.com/users/383","pm_score":-1,"selected":false,"text":"
% Matches any number of characters\n _ Matches a single character
\n\n
I've never used Full-Text indexing but you can accomplish rather complex and fast search queries by simply using the built-in T-SQL string functions.
\n"},{"answer_id":3422,"author":"Dave Ward","author_id":60,"author_profile":"https://Stackoverflow.com/users/60","pm_score":2,"selected":false,"text":"
One thing worth keeping in mind is that leading wildcard queries come at a significant performance premium, compared to other wildcard usages.
\n"},{"answer_id":3427,"author":"Michael Stum","author_id":91,"author_profile":"https://Stackoverflow.com/users/91","pm_score":4,"selected":false,"text":"
The problem with leading Wildcards: They cannot be indexed, hence you're doing a full table scan.
\n"},{"answer_id":3521,"author":"Otto","author_id":519,"author_profile":"https://Stackoverflow.com/users/519","pm_score":-1,"selected":false,"text":"
From SQL Server Books Online:
\n\n
\nTo write full-text queries in\n Microsoft SQL Server 2005, you must\n learn how to use the CONTAINS and\n FREETEXT Transact-SQL predicates, and\n the CONTAINSTABLE and FREETEXTTABLE\n rowset-valued functions.
\n
\n\n
That means all of the queries written above with the % and _ are not valid full text queries.
\n\n
Here is a sample of what a query looks like when calling the CONTAINSTABLE function.
\n\n
\nSELECT RANK, * FROM TableName,\n CONTAINSTABLE(TableName, *, ' "*WildCard" ') searchTable\nWHERE [KEY] = TableName.pk\nORDER BY searchTable.RANK DESC
\n
\n\n
In order for the CONTAINSTABLE function to know that I'm using a wildcard search, I have to wrap it in double quotes. I can use the wildcard character * at the beginning or ending. There are a lot of other things you can do when you're building the search string for the CONTAINSTABLE function. You can search for a word near another word, search for inflectional words (drive = drives, drove, driving, and driven), and search for synonym of another word (metal can have synonyms such as aluminum and steel).
\n\n
I just created a table, put a full text index on the table and did a couple of test searches and didn't have a problem, so wildcard searching works as intended.
\n\n
[Update]
\n\n
I see that you've updated your question and know that you need to use one of the functions.
\n\n
You can still search with the wildcard at the beginning, but if the word is not a full word following the wildcard, you have to add another wildcard at the end.
\n\n
Example: \"*ildcar\" will look for a single word as long as it ends with \"ildcar\".\n\nExample: \"*ildcar*\" will look for a single word with \"ildcar\" in the middle, which means it will match \"wildcard\". [Just noticed that Markdown removed the wildcard characters from the beginning and ending of my quoted string here.]\n\n\n
[Update #2]
\n\n
Dave Ward - Using a wildcard with one of the functions shouldn't be a huge perf hit. If I created a search string with just \"*\", it will not return all rows, in my test case, it returned 0 records.
\n"},{"answer_id":6078,"author":"Sean Carpenter","author_id":729,"author_profile":"https://Stackoverflow.com/users/729","pm_score":0,"selected":false,"text":"
When it comes to full-text searching, for my money nothing beats Lucene. There is a .Net port available that is compatible with indexes created with the Java version.
\n\n
There's a little work involved in that you have to create/maintain the indexes, but the search speed is fantastic and you can create all sorts of interesting queries. Even indexing speed is pretty good - we just completely rebuild our indexes once a day and don't worry about updating them.
\n\n
As an example, this search functionality is powered by Lucene.Net.
\n"},{"answer_id":68207,"author":"user9569","author_id":9569,"author_profile":"https://Stackoverflow.com/users/9569","pm_score":1,"selected":false,"text":"
Just FYI, Google does not do any substring searches or truncation, right or left. They have a wildcard character * to find unknown words in a phrase, but not parts of a word.
\n\n
Google, along with most full-text search engines, sets up an inverted index based on the alphabetical order of words, with links to their source documents. Binary search is wicked fast, even for huge indexes. But it's really really hard to do a left-truncation in this case, because it loses the advantage of the index.
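To make the asymmetry concrete, here is a toy model of that inverted index in Python (the word list and helper names are illustrative, not how any real engine is implemented): prefix search can binary-search the sorted vocabulary, while a left-truncated search has to scan every entry.

```python
import bisect

# A toy "inverted index": the sorted vocabulary of indexed words.
words = sorted(["stackoverflow", "stack", "overflow", "database", "data"])

def prefix_search(vocab, prefix):
    """Trailing-wildcard search ("data*"): matches form one contiguous
    run in the sorted list, found by binary search -- O(log n)."""
    lo = bisect.bisect_left(vocab, prefix)
    hi = bisect.bisect_right(vocab, prefix + "\uffff")
    return vocab[lo:hi]

def suffix_search(vocab, suffix):
    """Leading-wildcard search ("*flow"): matches are scattered through
    the sorted list, so every word must be examined -- O(n)."""
    return [w for w in vocab if w.endswith(suffix)]

print(prefix_search(words, "data"))      # ['data', 'database']
print(suffix_search(words, "overflow"))  # ['overflow', 'stackoverflow']
```

The suffix case degenerating to a full scan is exactly why left truncation "loses the advantage of the index".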
\n"},{"answer_id":124502,"author":"xnagyg","author_id":2622295,"author_profile":"https://Stackoverflow.com/users/2622295","pm_score":5,"selected":false,"text":"
Workaround only for leading wildcard:
\n\n
store the text reversed in a different field (or in a materialised view)
create a full text index on this column
find the reversed text with an *
\n\nSELECT * \nFROM TABLENAME \nWHERE CONTAINS(TextColumnREV, '\"mrethcraes*\"');\n\n\n
Of course there are many drawbacks, just for quick workaround...
\n\n
Not to mention CONTAINSTABLE...
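A minimal sketch of the trick, using Python with SQLite's LIKE as a stand-in (table and column names here are made up; in SQL Server you would keep a persisted reversed column with its own full-text index): a leading-wildcard match on the text becomes a prefix match on the reversed text, which an index can serve.

```python
import sqlite3

# Reverse-column workaround: "*overflow" on `text` becomes the
# prefix search "wolfrevo%" on `text_rev`.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (text TEXT, text_rev TEXT)")
for t in ("stackoverflow", "overflow", "understate"):
    conn.execute("INSERT INTO docs VALUES (?, ?)", (t, t[::-1]))

term = "overflow"          # user searched for "*overflow"
rev = term[::-1]           # reverse the search term too
rows = conn.execute(
    "SELECT text FROM docs WHERE text_rev LIKE ? ORDER BY text",
    (rev + "%",)).fetchall()
print([r[0] for r in rows])   # ['overflow', 'stackoverflow']
```

The reversed column must be maintained alongside the original (trigger, computed column, or application code), which is one of the drawbacks mentioned above.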
\n"},{"answer_id":320132,"author":"Community","author_id":-1,"author_profile":"https://Stackoverflow.com/users/-1","pm_score":4,"selected":false,"text":"
It is possible to use the wildcard \"*\" at the end of the word or phrase (prefix search).
\n\n
For example, this query will find all \"datab\", \"database\", \"databases\" ...
\n\n
SELECT * FROM SomeTable WHERE CONTAINS(ColumnName, '\"datab*\"')\n\n\n
But, unfortunately, it is not possible to search with a leading wildcard.
\n\n
For example, this query will not find \"database\"
\n\n
SELECT * FROM SomeTable WHERE CONTAINS(ColumnName, '\"*abase\"')\n\n"},{"answer_id":7678692,"author":"Forrest","author_id":982752,"author_profile":"https://Stackoverflow.com/users/982752","pm_score":2,"selected":false,"text":"
To perhaps add clarity to this thread, from my testing on 2008 R2, Franjo is correct above. When dealing with full text searching, at least when using the CONTAINS phrase, you cannot use a leading *, only a trailing * functionally. * is the wildcard, not %, in full text.
\n\n
Some have suggested that * is ignored. That does not seem to be the case, my results seem to show that the trailing * functionality does work. I think leading * are ignored by the engine.
\n\n
My added problem, however, is that the same query, with a trailing *, that uses full text with wildcards worked relatively fast on 2005 (20 seconds), and slowed to 12 minutes after migrating the db to 2008 R2. It seems at least one other user had similar results and he started a forum post which I added to... FREETEXT still works fast, but something "seems" to have changed with the way 2008 processes trailing * in CONTAINS. They give all sorts of warnings in the Upgrade Advisor that they "improved" FULL TEXT so your code may break, but unfortunately they do not give you any specific warnings about certain deprecated code etc. ...just a disclaimer that they changed it, use at your own risk.
\n\n
http://social.msdn.microsoft.com/Forums/ar-SA/sqlsearch/thread/7e45b7e4-2061-4c89-af68-febd668f346c
\n\n
Maybe, this is the closest MS hit related to these issues... http://msdn.microsoft.com/en-us/library/ms143709.aspx
\n"},{"answer_id":34602059,"author":"ASP Force","author_id":3464788,"author_profile":"https://Stackoverflow.com/users/3464788","pm_score":1,"selected":false,"text":"
As a parameter in a stored procedure you can use it as:
\n\n
ALTER procedure [dbo].[uspLkp_DrugProductSelectAllByName]\n(\n @PROPRIETARY_NAME varchar(10)\n)\nas\n set nocount on\n declare @PROPRIETARY_NAME2 varchar(10) = '\"' + @PROPRIETARY_NAME + '*\"'\n\n select ldp.*, lkp.DRUG_PKG_ID\n from Lkp_DrugProduct ldp\n left outer join Lkp_DrugPackage lkp on ldp.DRUG_PROD_ID = lkp.DRUG_PROD_ID\n where contains(ldp.PROPRIETARY_NAME, @PROPRIETARY_NAME2)\n\n"},{"answer_id":40114596,"author":"LogicalMan","author_id":6460524,"author_profile":"https://Stackoverflow.com/users/6460524","pm_score":0,"selected":false,"text":"
Perhaps the following link will provide the final answer to this use of wildcards: Performing FTS Wildcard Searches.
\n\n
Note the passage that states: "However, if you specify “*Chain” or “*Chain*”, you will not get the expected result. The asterisk will be considered as a normal punctuation mark, not a wildcard character."
\n"},{"answer_id":51138309,"author":"Hans","author_id":9989507,"author_profile":"https://Stackoverflow.com/users/9989507","pm_score":0,"selected":false,"text":"
If you have access to the list of words of the full-text search engine, you could do a LIKE search on this list and match the database with the words found, e.g. a table 'words' with the following words:
\n\n
pie\n applepie\n spies\n cherrypie\n dog\n cat\n\n\n
To match all words containing 'pie' in this database on a fts table 'full_text' with field 'text':
\n\n
to-match <- SELECT word FROM words WHERE word LIKE '%pie%'\n matcher = \"\"\n a = \"\"\n foreach(m, to-match) {\n matcher += a\n matcher += m\n a = \" OR \"\n }\n SELECT text FROM full_text WHERE text MATCH matcher\n\n"}],"string":"[\n {\n \"answer_id\": 3405,\n \"author\": \"Kev\",\n \"author_id\": 419,\n \"author_profile\": \"https://Stackoverflow.com/users/419\",\n \"pm_score\": 2,\n \"selected\": false,\n \"text\": \"
Note: this was the answer I submitted for the original version #1 of the question before the CONTAINS keyword was introduced in revision #2. It's still factually accurate.
\\n
The wildcard character in SQL Server is the % sign and it works just fine, leading, trailing or otherwise.
\\n
That said, if you're going to be doing any kind of serious full text searching then I'd consider utilising the Full Text Index capabilities. Using % and _ wild cards will cause your database to take a serious performance hit.
\\n\"\n },\n {\n \"answer_id\": 3413,\n \"author\": \"GateKiller\",\n \"author_id\": 383,\n \"author_profile\": \"https://Stackoverflow.com/users/383\",\n \"pm_score\": -1,\n \"selected\": false,\n \"text\": \"
% Matches any number of characters\\n _ Matches a single character
\\n\\n
I've never used Full-Text indexing but you can accomplish rather complex and fast search queries by simply using the built-in T-SQL string functions.
\\n\"\n },\n {\n \"answer_id\": 3422,\n \"author\": \"Dave Ward\",\n \"author_id\": 60,\n \"author_profile\": \"https://Stackoverflow.com/users/60\",\n \"pm_score\": 2,\n \"selected\": false,\n \"text\": \"
One thing worth keeping in mind is that leading wildcard queries come at a significant performance premium, compared to other wildcard usages.
\\n\"\n },\n {\n \"answer_id\": 3427,\n \"author\": \"Michael Stum\",\n \"author_id\": 91,\n \"author_profile\": \"https://Stackoverflow.com/users/91\",\n \"pm_score\": 4,\n \"selected\": false,\n \"text\": \"
The problem with leading Wildcards: They cannot be indexed, hence you're doing a full table scan.
\\n\"\n },\n {\n \"answer_id\": 3521,\n \"author\": \"Otto\",\n \"author_id\": 519,\n \"author_profile\": \"https://Stackoverflow.com/users/519\",\n \"pm_score\": -1,\n \"selected\": false,\n \"text\": \"
From SQL Server Books Online:
\\n\\n
\\nTo write full-text queries in\\n Microsoft SQL Server 2005, you must\\n learn how to use the CONTAINS and\\n FREETEXT Transact-SQL predicates, and\\n the CONTAINSTABLE and FREETEXTTABLE\\n rowset-valued functions.
\\n
\\n\\n
That means all of the queries written above with the % and _ are not valid full text queries.
\\n\\n
Here is a sample of what a query looks like when calling the CONTAINSTABLE function.
\\n\\n
\\nSELECT RANK , * FROM TableName ,\\n CONTAINSTABLE (TableName, *, '\\n \\\"*WildCard\\\" ') searchTable WHERE\\n [KEY] = TableName.pk ORDER BY \\n searchTable.RANK DESC
\\n
\\n\\n
In order for the CONTAINSTABLE function to know that I'm using a wildcard search, I have to wrap it in double quotes. I can use the wildcard character * at the beginning or ending. There are a lot of other things you can do when you're building the search string for the CONTAINSTABLE function. You can search for a word near another word, search for inflectional words (drive = drives, drove, driving, and driven), and search for synonym of another word (metal can have synonyms such as aluminum and steel).
\\n\\n
I just created a table, put a full text index on the table and did a couple of test searches and didn't have a problem, so wildcard searching works as intended.
\\n\\n
[Update]
\\n\\n
I see that you've updated your question and know that you need to use one of the functions.
\\n\\n
You can still search with the wildcard at the beginning, but if the word is not a full word following the wildcard, you have to add another wildcard at the end.
\\n\\n
Example: \\\"*ildcar\\\" will look for a single word as long as it ends with \\\"ildcar\\\".\\n\\nExample: \\\"*ildcar*\\\" will look for a single word with \\\"ildcar\\\" in the middle, which means it will match \\\"wildcard\\\". [Just noticed that Markdown removed the wildcard characters from the beginning and ending of my quoted string here.]\\n\\n\\n
[Update #2]
\\n\\n
Dave Ward - Using a wildcard with one of the functions shouldn't be a huge perf hit. If I created a search string with just \\\"*\\\", it will not return all rows, in my test case, it returned 0 records.
\\n\"\n },\n {\n \"answer_id\": 6078,\n \"author\": \"Sean Carpenter\",\n \"author_id\": 729,\n \"author_profile\": \"https://Stackoverflow.com/users/729\",\n \"pm_score\": 0,\n \"selected\": false,\n \"text\": \"
When it comes to full-text searching, for my money nothing beats Lucene. There is a .Net port available that is compatible with indexes created with the Java version.
\\n\\n
There's a little work involved in that you have to create/maintain the indexes, but the search speed is fantastic and you can create all sorts of interesting queries. Even indexing speed is pretty good - we just completely rebuild our indexes once a day and don't worry about updating them.
\\n\\n
As an example, this search functionality is powered by Lucene.Net.
\\n\"\n },\n {\n \"answer_id\": 68207,\n \"author\": \"user9569\",\n \"author_id\": 9569,\n \"author_profile\": \"https://Stackoverflow.com/users/9569\",\n \"pm_score\": 1,\n \"selected\": false,\n \"text\": \"
Just FYI, Google does not do any substring searches or truncation, right or left. They have a wildcard character * to find unknown words in a phrase, but not a word.
\\n\\n
Google, along with most full-text search engines, sets up an inverted index based on the alphabetical order of words, with links to their source documents. Binary search is wicked fast, even for huge indexes. But it's really really hard to do a left-truncation in this case, because it loses the advantage of the index.
\\n\"\n },\n {\n \"answer_id\": 124502,\n \"author\": \"xnagyg\",\n \"author_id\": 2622295,\n \"author_profile\": \"https://Stackoverflow.com/users/2622295\",\n \"pm_score\": 5,\n \"selected\": false,\n \"text\": \"
Workaround only for leading wildcard:
\\n\\n
store the text reversed in a different field (or in a materialised view)
create a full text index on this column
find the reversed text with an *
\\n\\nSELECT * \\nFROM TABLENAME \\nWHERE CONTAINS(TextColumnREV, '\\\"mrethcraes*\\\"');\\n\\n\\n
Of course there are many drawbacks, just for quick workaround...
\\n\\n
Not to mention CONTAINSTABLE...
\\n\"\n },\n {\n \"answer_id\": 320132,\n \"author\": \"Community\",\n \"author_id\": -1,\n \"author_profile\": \"https://Stackoverflow.com/users/-1\",\n \"pm_score\": 4,\n \"selected\": false,\n \"text\": \"
It is possible to use the wildcard \\\"*\\\" at the end of the word or phrase (prefix search).
\\n\\n
For example, this query will find all \\\"datab\\\", \\\"database\\\", \\\"databases\\\" ...
\\n\\n
SELECT * FROM SomeTable WHERE CONTAINS(ColumnName, '\\\"datab*\\\"')\\n\\n\\n
But, unfortunately, it is not possible to search with a leading wildcard.
\\n\\n
For example, this query will not find \\\"database\\\"
\\n\\n
SELECT * FROM SomeTable WHERE CONTAINS(ColumnName, '\\\"*abase\\\"')\\n\\n\"\n },\n {\n \"answer_id\": 7678692,\n \"author\": \"Forrest\",\n \"author_id\": 982752,\n \"author_profile\": \"https://Stackoverflow.com/users/982752\",\n \"pm_score\": 2,\n \"selected\": false,\n \"text\": \"
To perhaps add clarity to this thread, from my testing on 2008 R2, Franjo is correct above. When dealing with full text searching, at least when using the CONTAINS phrase, you cannot use a leading *, only a trailing * functionally. * is the wildcard, not %, in full text.
\\n\\n
Some have suggested that * is ignored. That does not seem to be the case, my results seem to show that the trailing * functionality does work. I think leading * are ignored by the engine.
\\n\\n
My added problem however is that the same query, with a trailing *, that uses full text with wildcards worked relatively fast on 2005(20 seconds), and slowed to 12 minutes after migrating the db to 2008 R2. It seems at least one other user had similar results and he started a forum post which I added to... FREETEXT works fast still, but something \\\"seems\\\" to have changed with the way 2008 processes trailing * in CONTAINS. They give all sorts of warnings in the Upgrade Advisor that they \\\"improved\\\" FULL TEXT so your code may break, but unfortunately they do not give you any specific warnings about certain deprecated code etc. ...just a disclaimer that they changed it, use at your own risk.
\\n\\n
http://social.msdn.microsoft.com/Forums/ar-SA/sqlsearch/thread/7e45b7e4-2061-4c89-af68-febd668f346c
\\n\\n
Maybe, this is the closest MS hit related to these issues... http://msdn.microsoft.com/en-us/library/ms143709.aspx
\\n\"\n },\n {\n \"answer_id\": 34602059,\n \"author\": \"ASP Force\",\n \"author_id\": 3464788,\n \"author_profile\": \"https://Stackoverflow.com/users/3464788\",\n \"pm_score\": 1,\n \"selected\": false,\n \"text\": \"
As a parameter in a stored procedure you can use it as:
\\n\\n
ALTER procedure [dbo].[uspLkp_DrugProductSelectAllByName]\\n(\\n @PROPRIETARY_NAME varchar(10)\\n)\\nas\\n set nocount on\\n declare @PROPRIETARY_NAME2 varchar(10) = '\\\"' + @PROPRIETARY_NAME + '*\\\"'\\n\\n select ldp.*, lkp.DRUG_PKG_ID\\n from Lkp_DrugProduct ldp\\n left outer join Lkp_DrugPackage lkp on ldp.DRUG_PROD_ID = lkp.DRUG_PROD_ID\\n where contains(ldp.PROPRIETARY_NAME, @PROPRIETARY_NAME2)\\n\\n\"\n },\n {\n \"answer_id\": 40114596,\n \"author\": \"LogicalMan\",\n \"author_id\": 6460524,\n \"author_profile\": \"https://Stackoverflow.com/users/6460524\",\n \"pm_score\": 0,\n \"selected\": false,\n \"text\": \"
Perhaps the following link will provide the final answer to this use of wildcards: Performing FTS Wildcard Searches.
\\n\\n
Note the passage that states: \"However, if you specify “*Chain” or “*Chain*”, you will not get the expected result. The asterisk will be considered as a normal punctuation mark, not a wildcard character.\"
\\n\"\n },\n {\n \"answer_id\": 51138309,\n \"author\": \"Hans\",\n \"author_id\": 9989507,\n \"author_profile\": \"https://Stackoverflow.com/users/9989507\",\n \"pm_score\": 0,\n \"selected\": false,\n \"text\": \"
If you have access to the list of words of the full text search engine, you could do a 'like' search on this list and match the database with the words found, e.g. a table 'words' with following words:
\\n\\n
pie\\n applepie\\n spies\\n cherrypie\\n dog\\n cat\\n\\n\\n
To match all words containing 'pie' in this database on a fts table 'full_text' with field 'text':
\\n\\n
to-match <- SELECT word FROM words WHERE word LIKE '%pie%'\\n matcher = \\\"\\\"\\n a = \\\"\\\"\\n foreach(m, to-match) {\\n matcher += a\\n matcher += m\\n a = \\\" OR \\\"\\n }\\n SELECT text FROM full_text WHERE text MATCH matcher\\n\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/06"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/3400","https://Stackoverflow.com","https://Stackoverflow.com/users/35/"],"string":"[\n \"https://Stackoverflow.com/questions/3400\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/35/\"\n]"},"input":{"kind":"string","value":"**Note:** I *am* using SQL's Full-text search capabilities, CONTAINS clauses and all - the \\* is the wildcard in full-text, % is for LIKE clauses only.\n\nI've read in several places now that \"leading wildcard\" searches (e.g. using \"\\*overflow\" to match \"stackoverflow\") is not supported in MS SQL. I'm considering using a [CLR function to add regex matching](http://blogs.msdn.com/sqlclr/archive/2005/06/29/regex.aspx \"SQL CLR Blog\"), but I'm curious to see what other solutions people might have.\n\n**More Info**: [You can add the asterisk only at the end of the word or phrase.](http://msdn.microsoft.com/en-us/library/ms552152.aspx \"MSDN\") - along with my empirical experience: When matching \"myvalue\", \"my\\*\" works, but \"(asterisk)value\" returns no match, when doing a query as simple as:\n\n```\nSELECT * FROM TABLENAME WHERE CONTAINS(TextColumn, '\"*searchterm\"');\n\n```\n\nThus, my need for a workaround. I'm only using search in my site on an actual search page - so it needs to work basically the same way that Google works (in the eyes on a Joe Sixpack-type user). 
Not nearly as complicated, but this sort of match really shouldn't fail."},"output":{"kind":"string","value":"Workaround only for leading wildcard:\n\n* store the text reversed in a different field (or in materialised view)\n* create a full text index on this column\n* find the reversed text with an \\*\n\n```\nSELECT * \nFROM TABLENAME \nWHERE CONTAINS(TextColumnREV, '\"mrethcraes*\"');\n\n```\n\nOf course there are many drawbacks, just for quick workaround...\n\nNot to mention CONTAINSTABLE..."}}},{"rowIdx":383,"cells":{"qid":{"kind":"number","value":3432,"string":"3,432"},"question":{"kind":"string","value":"
I know that you can insert multiple rows at once; is there a way to update multiple rows at once (as in, in one query) in MySQL?
\n\n
Edit:\nFor example I have the following
\n\n
Name id Col1 Col2\nRow1 1 6 1\nRow2 2 2 3\nRow3 3 9 5\nRow4 4 16 8\n\n\n
I want to combine all the following Updates into one query
\n\n
UPDATE table SET Col1 = 1 WHERE id = 1;\nUPDATE table SET Col1 = 2 WHERE id = 2;\nUPDATE table SET Col2 = 3 WHERE id = 3;\nUPDATE table SET Col1 = 10 WHERE id = 4;\nUPDATE table SET Col2 = 12 WHERE id = 4;\n\n"},"answers":{"kind":"list like","value":[{"answer_id":3439,"author":"UnkwnTech","author_id":115,"author_profile":"https://Stackoverflow.com/users/115","pm_score":3,"selected":false,"text":"
UPDATE table1, table2 SET table1.col1='value', table2.col1='value' WHERE table1.col3='567' AND table2.col6='567'\n\n\n
This should work for ya.
\n\n
There is a reference in the MySQL manual for multiple tables.
\n"},{"answer_id":3444,"author":"GateKiller","author_id":383,"author_profile":"https://Stackoverflow.com/users/383","pm_score":-1,"selected":false,"text":"
The following will update all rows in one table
\n\n
Update Table Set\nColumn1 = 'New Value'\n\n\n
The next one will update all rows where the value of Column2 is more than 5
\n\n
Update Table Set\nColumn1 = 'New Value'\nWhere\nColumn2 > 5\n\n\n
Here is Unkwntech's example of updating more than one table:
\n\n
UPDATE table1, table2 SET\ntable1.col1 = 'value',\ntable2.col1 = 'value'\nWHERE\ntable1.col3 = '567'\nAND table2.col6='567'\n\n"},{"answer_id":3445,"author":"Shawn","author_id":26,"author_profile":"https://Stackoverflow.com/users/26","pm_score":2,"selected":false,"text":"
You may also be interested in using joins on updates, which is possible as well.
\n\n
Update someTable Set someValue = 4 From someTable s Inner Join anotherTable a on s.id = a.id Where a.id = 4\n-- Only updates someValue in someTable who has a foreign key on anotherTable with a value of 4.\n\n\n
Edit: If the values you are updating aren't coming from somewhere else in the database, you'll need to issue multiple update queries.
\n"},{"answer_id":3449,"author":"UnkwnTech","author_id":115,"author_profile":"https://Stackoverflow.com/users/115","pm_score":-1,"selected":false,"text":"
UPDATE tableName SET col1='000' WHERE id='3' OR id='5'\n\n\n
This should achieve what you're looking for. Just add more ids. I have tested it.
\n"},{"answer_id":3466,"author":"Michiel de Mare","author_id":136,"author_profile":"https://Stackoverflow.com/users/136","pm_score":10,"selected":true,"text":"
Yes, that's possible - you can use INSERT ... ON DUPLICATE KEY UPDATE.
\n\n
Using your example:
\n\n
INSERT INTO table (id,Col1,Col2) VALUES (1,1,1),(2,2,3),(3,9,3),(4,10,12)\nON DUPLICATE KEY UPDATE Col1=VALUES(Col1),Col2=VALUES(Col2);\n\n"},{"answer_id":84111,"author":"Harrison Fisk","author_id":16111,"author_profile":"https://Stackoverflow.com/users/16111","pm_score":7,"selected":false,"text":"
Since you have dynamic values, you need to use an IF or CASE for the columns to be updated. It gets kinda ugly, but it should work.
\n\n
Using your example, you could do it like:
\n\n
\nUPDATE table SET Col1 = CASE id \n WHEN 1 THEN 1 \n WHEN 2 THEN 2 \n WHEN 4 THEN 10 \n ELSE Col1 \n END, \n Col2 = CASE id \n WHEN 3 THEN 3 \n WHEN 4 THEN 12 \n ELSE Col2 \n END\n WHERE id IN (1, 2, 3, 4);\n
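For anyone who wants to try this end to end, here is a runnable sketch in Python using sqlite3 as a stand-in for MySQL (the simple-CASE syntax is the same in both; the table name `t` is mine, since `table` is reserved):

```python
import sqlite3

# The CASE-based single UPDATE, run against the question's sample rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, Col1 INT, Col2 INT)")
conn.executemany("INSERT INTO t VALUES (?,?,?)",
                 [(1, 6, 1), (2, 2, 3), (3, 9, 5), (4, 16, 8)])

conn.execute("""
    UPDATE t
    SET Col1 = CASE id WHEN 1 THEN 1 WHEN 2 THEN 2 WHEN 4 THEN 10 ELSE Col1 END,
        Col2 = CASE id WHEN 3 THEN 3 WHEN 4 THEN 12 ELSE Col2 END
    WHERE id IN (1, 2, 3, 4)
""")
print(conn.execute("SELECT * FROM t ORDER BY id").fetchall())
# [(1, 1, 1), (2, 2, 3), (3, 9, 3), (4, 10, 12)]
```

Note the ELSE branches: without them, rows matched by the WHERE clause but not by any WHEN would have the column set to NULL.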
\n"},{"answer_id":5213557,"author":"Brooks","author_id":126001,"author_profile":"https://Stackoverflow.com/users/126001","pm_score":2,"selected":false,"text":"
There is a setting you can alter called 'multi statement' that disables MySQL's 'safety mechanism', implemented to prevent (more than one) injection command. Typical of MySQL's 'brilliant' implementation, it also prevents the user from doing efficient queries.
\n\n
Here (http://dev.mysql.com/doc/refman/5.1/en/mysql-set-server-option.html) is some info on the C implementation of the setting.
\n\n
If you're using PHP, you can use mysqli to do multi statements (I think php has shipped with mysqli for a while now)
\n\n
$con = new mysqli('localhost','user1','password','my_database');\n$query = \"Update MyTable SET col1='some value' WHERE id=1 LIMIT 1;\";\n$query .= \"UPDATE MyTable SET col1='other value' WHERE id=2 LIMIT 1;\";\n//etc\n$con->multi_query($query);\n$con->close();\n\n\n
Hope that helps.
\n"},{"answer_id":5577503,"author":"Laymain","author_id":696291,"author_profile":"https://Stackoverflow.com/users/696291","pm_score":3,"selected":false,"text":"
Use a temporary table
\n
// Reorder items\nfunction update_items_tempdb(&$items)\n{\n shuffle($items);\n $table_name = uniqid('tmp_test_');\n $sql = "CREATE TEMPORARY TABLE `$table_name` ("\n ." `id` int(10) unsigned NOT NULL AUTO_INCREMENT"\n .", `position` int(10) unsigned NOT NULL"\n .", PRIMARY KEY (`id`)"\n .") ENGINE = MEMORY";\n query($sql);\n $i = 0;\n $sql = '';\n foreach ($items as &$item)\n {\n $item->position = $i++;\n $sql .= ($sql ? ', ' : '')."({$item->id}, {$item->position})";\n }\n if ($sql)\n {\n query("INSERT INTO `$table_name` (id, position) VALUES $sql");\n $sql = "UPDATE `test`, `$table_name` SET `test`.position = `$table_name`.position"\n ." WHERE `$table_name`.id = `test`.id";\n query($sql);\n }\n query("DROP TABLE `$table_name`");\n}\n\n"},{"answer_id":14128210,"author":"eggmatters","author_id":1010444,"author_profile":"https://Stackoverflow.com/users/1010444","pm_score":2,"selected":false,"text":"
You can alias the same table to give you the ids you want to update by (if you are doing a row-by-row update):
\n\n
UPDATE table1 tab1, table1 tab2 -- alias references the same table\nSET \ncol1 = 1\n,col2 = 2\n. . . \nWHERE \ntab1.id = tab2.id;\n\n\n
Additionally, It should seem obvious that you can also update from other tables as well. In this case, the update doubles as a \"SELECT\" statement, giving you the data from the table you are specifying. You are explicitly stating in your query the update values so, the second table is unaffected.
\n"},{"answer_id":17284265,"author":"Roman Imankulov","author_id":848010,"author_profile":"https://Stackoverflow.com/users/848010","pm_score":7,"selected":false,"text":"
The question is old, yet I'd like to extend the topic with another answer.
\n\n
My point is, the easiest way to achieve it is just to wrap multiple queries with a transaction. The accepted answer INSERT ... ON DUPLICATE KEY UPDATE is a nice hack, but one should be aware of its drawbacks and limitations:
\n\n
\"Field 'fieldname' doesn't have a default value\" MySQL warning even if you don't insert a single row at all. It will get you into trouble, if you decide to be strict and turn mysql warnings into runtime exceptions in your app.\n\n
I made some performance tests for three of the suggested variants, including the INSERT ... ON DUPLICATE KEY UPDATE variant, a variant with a "case / when / then" clause, and a naive approach with a transaction. You may get the python code and results here. The overall conclusion is that the variant with the case statement turns out to be twice as fast as the two other variants, but it's quite hard to write correct and injection-safe code for it, so I personally stick to the simplest approach: using transactions.
\n\n
Edit: Findings of Dakusan prove that my performance estimations are not quite valid. Please see this answer for another, more elaborate research.
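For illustration, a minimal sketch of the transaction variant in Python, with sqlite3 standing in for a MySQL driver (the COALESCE trick for leaving untouched columns alone is my own addition, not part of the benchmark above):

```python
import sqlite3

# Naive transaction variant: one parameterised UPDATE per row,
# all committed together. With MySQL you'd do the same via
# cursor.executemany(...) followed by connection.commit().
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, Col1 INT, Col2 INT)")
conn.executemany("INSERT INTO t VALUES (?,?,?)",
                 [(1, 6, 1), (2, 2, 3), (3, 9, 5), (4, 16, 8)])

updates = [  # (id, new Col1 or None, new Col2 or None)
    (1, 1, None), (2, 2, None), (3, None, 3), (4, 10, 12),
]
with conn:  # one transaction: commits on success, rolls back on error
    conn.executemany(
        "UPDATE t SET Col1 = COALESCE(?, Col1), Col2 = COALESCE(?, Col2) "
        "WHERE id = ?",
        [(c1, c2, i) for i, c1, c2 in updates])
print(conn.execute("SELECT * FROM t ORDER BY id").fetchall())
# [(1, 1, 1), (2, 2, 3), (3, 9, 3), (4, 10, 12)]
```

Because every statement is parameterised, this variant stays injection-safe without the string-building gymnastics the CASE approach needs.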
\n"},{"answer_id":18492422,"author":"user2082581","author_id":2082581,"author_profile":"https://Stackoverflow.com/users/2082581","pm_score":-1,"selected":false,"text":"
UPDATE `your_table` SET \n\n`something` = IF(`id`=\"1\",\"new_value1\",`something`), `smth2` = IF(`id`=\"1\", \"nv1\",`smth2`),\n`something` = IF(`id`=\"2\",\"new_value2\",`something`), `smth2` = IF(`id`=\"2\", \"nv2\",`smth2`),\n`something` = IF(`id`=\"4\",\"new_value3\",`something`), `smth2` = IF(`id`=\"4\", \"nv3\",`smth2`),\n`something` = IF(`id`=\"6\",\"new_value4\",`something`), `smth2` = IF(`id`=\"6\", \"nv4\",`smth2`),\n`something` = IF(`id`=\"3\",\"new_value5\",`something`), `smth2` = IF(`id`=\"3\", \"nv5\",`smth2`),\n`something` = IF(`id`=\"5\",\"new_value6\",`something`), `smth2` = IF(`id`=\"5\", \"nv6\",`smth2`) \n\n\n
// You just build it in PHP like:
\n\n
$q = 'UPDATE `your_table` SET ';\n\nforeach($data as $dat){\n\n $q .= '\n\n `something` = IF(`id`=\"'.$dat->id.'\",\"'.$dat->value.'\",`something`), \n `smth2` = IF(`id`=\"'.$dat->id.'\", \"'.$dat->value2.'\",`smth2`),';\n\n}\n\n$q = substr($q,0,-1);\n\n\n
So you can update the whole table with one query
\n"},{"answer_id":19033152,"author":"newtover","author_id":68998,"author_profile":"https://Stackoverflow.com/users/68998","pm_score":6,"selected":false,"text":"
Not sure why another useful option is not yet mentioned:
\n\n
UPDATE my_table m\nJOIN (\n SELECT 1 as id, 10 as _col1, 20 as _col2\n UNION ALL\n SELECT 2, 5, 10\n UNION ALL\n SELECT 3, 15, 30\n) vals ON m.id = vals.id\nSET col1 = _col1, col2 = _col2;\n\n"},{"answer_id":25217509,"author":"sara191186","author_id":2144822,"author_profile":"https://Stackoverflow.com/users/2144822","pm_score":0,"selected":false,"text":"
Yes, it is possible using the INSERT ... ON DUPLICATE KEY UPDATE SQL statement.\nSyntax:\nINSERT INTO table_name (a,b,c) VALUES (1,2,3),(4,5,6)\n ON DUPLICATE KEY UPDATE a=VALUES(a),b=VALUES(b),c=VALUES(c)
\n"},{"answer_id":36017552,"author":"Justin Levene","author_id":1938802,"author_profile":"https://Stackoverflow.com/users/1938802","pm_score":0,"selected":false,"text":"
use
\n\n
REPLACE INTO `table` (`id`,`col1`,`col2`) VALUES\n(1,6,1),(2,2,3),(3,9,5),(4,16,8);
Please note:
\n\n
- REPLACE is MySQL-specific (not standard SQL), and when a row with the same key already exists it works as a DELETE followed by an INSERT.
- Because of that, any columns you do not supply are reset to their defaults, and DELETE triggers and foreign-key ON DELETE actions will fire.\n
All of the following applies to InnoDB.
\n
I feel knowing the speeds of the 3 different methods is important.
\n
There are 3 methods:
\n
- INSERT: INSERT ... ON DUPLICATE KEY UPDATE, one multi-row statement
- TRANSACTION: one UPDATE per record, all wrapped in a single transaction
- CASE: a single UPDATE using a CASE/WHEN per record\n
I just tested this, and the INSERT method was 6.7x faster for me than the TRANSACTION method. I tried on a set of both 3,000 and 30,000 rows.
\n
The TRANSACTION method still has to run each query individually, which takes time, though it batches the results in memory, or something, while executing. The TRANSACTION method is also pretty expensive in both replication and query logs.
\n
Even worse, the CASE method was 41.1x slower than the INSERT method w/ 30,000 records (6.1x slower than TRANSACTION). And 75x slower in MyISAM. INSERT and CASE methods broke even at ~1,000 records. Even at 100 records, the CASE method is BARELY faster.
\n
So in general, I feel the INSERT method is both best and easiest to use. The queries are smaller and easier to read and only take up 1 query of action. This applies to both InnoDB and MyISAM.
\n
Bonus stuff:
\n
The solution for the INSERT non-default-field problem is to temporarily turn off the relevant SQL modes: SET SESSION sql_mode=REPLACE(REPLACE(@@SESSION.sql_mode,"STRICT_TRANS_TABLES",""),"STRICT_ALL_TABLES",""). Make sure to save the sql_mode first if you plan on reverting it.
\n
As for other comments I've seen that say the auto_increment goes up using the INSERT method, this does seem to be the case in InnoDB, but not MyISAM.
\n
Code to run the tests is as follows. It also outputs .SQL files to remove PHP interpreter overhead.
\n
<?php\n//Variables\n$NumRows=30000;\n\n//These 2 functions need to be filled in\nfunction InitSQL()\n{\n\n}\nfunction RunSQLQuery($Q)\n{\n\n}\n\n//Run the 3 tests\nInitSQL();\nfor($i=0;$i<3;$i++)\n RunTest($i, $NumRows);\n\nfunction RunTest($TestNum, $NumRows)\n{\n $TheQueries=Array();\n $DoQuery=function($Query) use (&$TheQueries)\n {\n RunSQLQuery($Query);\n $TheQueries[]=$Query;\n };\n\n $TableName='Test';\n $DoQuery('DROP TABLE IF EXISTS '.$TableName);\n $DoQuery('CREATE TABLE '.$TableName.' (i1 int NOT NULL AUTO_INCREMENT, i2 int NOT NULL, primary key (i1)) ENGINE=InnoDB');\n $DoQuery('INSERT INTO '.$TableName.' (i2) VALUES ('.implode('), (', range(2, $NumRows+1)).')');\n\n if($TestNum==0)\n {\n $TestName='Transaction';\n $Start=microtime(true);\n $DoQuery('START TRANSACTION');\n for($i=1;$i<=$NumRows;$i++)\n $DoQuery('UPDATE '.$TableName.' SET i2='.(($i+5)*1000).' WHERE i1='.$i);\n $DoQuery('COMMIT');\n }\n \n if($TestNum==1)\n {\n $TestName='Insert';\n $Query=Array();\n for($i=1;$i<=$NumRows;$i++)\n $Query[]=sprintf("(%d,%d)", $i, (($i+5)*1000));\n $Start=microtime(true);\n $DoQuery('INSERT INTO '.$TableName.' VALUES '.implode(', ', $Query).' ON DUPLICATE KEY UPDATE i2=VALUES(i2)');\n }\n \n if($TestNum==2)\n {\n $TestName='Case';\n $Query=Array();\n for($i=1;$i<=$NumRows;$i++)\n $Query[]=sprintf('WHEN %d THEN %d', $i, (($i+5)*1000));\n $Start=microtime(true);\n $DoQuery("UPDATE $TableName SET i2=CASE i1\\n".implode("\\n", $Query)."\\nEND\\nWHERE i1 IN (".implode(',', range(1, $NumRows)).')');\n }\n \n print "$TestName: ".(microtime(true)-$Start)."<br>\\n";\n\n file_put_contents("./$TestName.sql", implode(";\\n", $TheQueries).';');\n}\n\n"},{"answer_id":44931466,"author":"mononoke","author_id":6088837,"author_profile":"https://Stackoverflow.com/users/6088837","pm_score":3,"selected":false,"text":"
Why does no one mention multiple statements in one query?
\n\n
In php, you use multi_query method of mysqli instance.
\n\n
From the php manual
\n\n
\nMySQL optionally allows having multiple statements in one statement string. Sending multiple statements at once reduces client-server round trips but requires special handling.
\n
\n\n
Here are the results compared to the other 3 methods when updating 30,000 rows. The code can be found here; it is based on the answer from @Dakusan.
\n\n
Transaction: 5.5194580554962
\nInsert: 0.20669293403625
\nCase: 16.474853992462
\nMulti: 0.0412278175354
\n\n
As you can see, the multiple-statement query is more efficient than the one in the highest-voted answer.
\n\n
If you get an error message like this:
\n\n
PHP Warning: Error while sending SET_OPTION packet\n\n\n
You may need to increase max_allowed_packet in the MySQL config file (on my machine it is /etc/mysql/my.cnf) and then restart mysqld.
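A sketch of how such a multi-statement string can be assembled (Python rather than PHP here; the table and column names are made up, and values are coerced to int because multi-statement APIs cannot use placeholders across statements):

```python
def build_multi_update(table, col, updates):
    """Join one UPDATE per row into a single ';'-separated string, as
    consumed by multi-statement APIs such as PHP's mysqli::multi_query.
    int() coercion keeps the generated SQL injection-safe for numeric data."""
    stmts = ["UPDATE %s SET %s=%d WHERE id=%d" % (table, col, int(v), int(i))
             for i, v in updates]
    return ";\n".join(stmts) + ";"

print(build_multi_update("mytable", "col1", [(1, 1), (2, 2), (4, 10)]))
```

String values would need proper driver-side escaping instead of the `%d` coercion shown here.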
\n"},{"answer_id":61643990,"author":"Stan Sokolov","author_id":1610778,"author_profile":"https://Stackoverflow.com/users/1610778","pm_score":1,"selected":false,"text":"
And now the easy way
\n
update my_table m, -- let create a temp table with populated values\n (select 1 as id, 20 as value union -- this part will be generated\n select 2 as id, 30 as value union -- using a backend code\n -- for loop \n select N as id, X as value\n ) t\nset m.value = t.value where t.id=m.id -- now update by join - quick\n\n"},{"answer_id":65950733,"author":"Liam","author_id":3714181,"author_profile":"https://Stackoverflow.com/users/3714181","pm_score":0,"selected":false,"text":"
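The generated part mentioned in the comments can be produced like this — a sketch (Python; the helper name is hypothetical and integer-only values are assumed for injection safety):

```python
def build_union_update(table, updates):
    """Build the UPDATE-by-join statement from (id, value) pairs by
    generating the derived table as a chain of SELECT ... UNION."""
    selects = " union ".join("select %d as id, %d as value" % (int(i), int(v))
                             for i, v in updates)
    return ("update %s m, (%s) t set m.value = t.value where t.id = m.id"
            % (table, selects))

print(build_union_update("my_table", [(1, 20), (2, 30)]))
# update my_table m, (select 1 as id, 20 as value union select 2 as id, 30 as value) t set m.value = t.value where t.id = m.id
```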
I took the answer from @newtover and extended it using the new json_table function in MySql 8. This allows you to create a stored procedure to handle the workload rather than building your own SQL text in code:
\n
drop table if exists `test`;\ncreate table `test` (\n `Id` int,\n `Number` int,\n PRIMARY KEY (`Id`)\n);\ninsert into test (Id, Number) values (1, 1), (2, 2);\n\nDROP procedure IF EXISTS `Test`;\nDELIMITER $$\nCREATE PROCEDURE `Test`(\n p_json json\n)\nBEGIN\n update test s\n join json_table(p_json, '$[*]' columns(`id` int path '$.id', `number` int path '$.number')) v \n on s.Id=v.id set s.Number=v.number;\nEND$$\nDELIMITER ;\n\ncall `Test`('[{"id": 1, "number": 10}, {"id": 2, "number": 20}]');\nselect * from test;\n\ndrop table if exists `test`;\n\n
It's a few ms slower than pure SQL but I'm happy to take the hit rather than generate the sql text in code. Not sure how performant it is with huge recordsets (the JSON object has a max size of 1Gb) but I use it all the time when updating 10k rows at a time.
\n"}],"string":"[\n {\n \"answer_id\": 3439,\n \"author\": \"UnkwnTech\",\n \"author_id\": 115,\n \"author_profile\": \"https://Stackoverflow.com/users/115\",\n \"pm_score\": 3,\n \"selected\": false,\n \"text\": \"
UPDATE table1, table2 SET table1.col1='value', table2.col1='value' WHERE table1.col3='567' AND table2.col6='567'\\n\\n\\n
This should work for ya.
\\n\\n
There is a reference in the MySQL manual for multiple tables.
\\n\"\n },\n {\n \"answer_id\": 3444,\n \"author\": \"GateKiller\",\n \"author_id\": 383,\n \"author_profile\": \"https://Stackoverflow.com/users/383\",\n \"pm_score\": -1,\n \"selected\": false,\n \"text\": \"
The following will update all rows in one table
\\n\\n
Update Table Set\\nColumn1 = 'New Value'\\n\\n\\n
The next one will update all rows where the value of Column2 is more than 5
\\n\\n
Update Table Set\\nColumn1 = 'New Value'\\nWhere\\nColumn2 > 5\\n\\n\\n
Here is Unkwntech's example of updating more than one table:
\\n\\n
UPDATE table1, table2 SET\\ntable1.col1 = 'value',\\ntable2.col1 = 'value'\\nWHERE\\ntable1.col3 = '567'\\nAND table2.col6='567'\\n\\n\"\n },\n {\n \"answer_id\": 3445,\n \"author\": \"Shawn\",\n \"author_id\": 26,\n \"author_profile\": \"https://Stackoverflow.com/users/26\",\n \"pm_score\": 2,\n \"selected\": false,\n \"text\": \"
You may also be interested in using joins on updates, which is possible as well.
\\n\\n
Update someTable Set someValue = 4 From someTable s Inner Join anotherTable a on s.id = a.id Where a.id = 4\\n-- Only updates someValue in someTable who has a foreign key on anotherTable with a value of 4.\\n\\n\\n
Edit: If the values you are updating aren't coming from somewhere else in the database, you'll need to issue multiple update queries.
\\n\"\n },\n {\n \"answer_id\": 3449,\n \"author\": \"UnkwnTech\",\n \"author_id\": 115,\n \"author_profile\": \"https://Stackoverflow.com/users/115\",\n \"pm_score\": -1,\n \"selected\": false,\n \"text\": \"
UPDATE tableName SET col1='000' WHERE id='3' OR id='5'\\n\\n\\n
This should achieve what you're looking for. Just add more IDs. I have tested it.
\\n\"\n },\n {\n \"answer_id\": 3466,\n \"author\": \"Michiel de Mare\",\n \"author_id\": 136,\n \"author_profile\": \"https://Stackoverflow.com/users/136\",\n \"pm_score\": 10,\n \"selected\": true,\n \"text\": \"
Yes, that's possible - you can use INSERT ... ON DUPLICATE KEY UPDATE.
\\n\\n
Using your example:
\\n\\n
INSERT INTO table (id,Col1,Col2) VALUES (1,1,1),(2,2,3),(3,9,3),(4,10,12)\\nON DUPLICATE KEY UPDATE Col1=VALUES(Col1),Col2=VALUES(Col2);\\n\\n\"\n },\n {\n \"answer_id\": 84111,\n \"author\": \"Harrison Fisk\",\n \"author_id\": 16111,\n \"author_profile\": \"https://Stackoverflow.com/users/16111\",\n \"pm_score\": 7,\n \"selected\": false,\n \"text\": \"
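To avoid hand-concatenating values, the statement can also be generated with placeholders and a flat parameter list — a sketch in Python (the helper name and the %s placeholder style of common MySQL drivers are assumptions, and id is assumed to be the primary/unique key):

```python
def build_upsert(table, cols, rows):
    """Build an INSERT ... ON DUPLICATE KEY UPDATE statement with %s
    placeholders, plus the flattened parameter list for cursor.execute().
    The first column is treated as the key; the rest are updated."""
    placeholders = ", ".join("(" + ", ".join(["%s"] * len(cols)) + ")" for _ in rows)
    updates = ", ".join("{0}=VALUES({0})".format(c) for c in cols[1:])
    sql = "INSERT INTO {} ({}) VALUES {} ON DUPLICATE KEY UPDATE {}".format(
        table, ", ".join(cols), placeholders, updates)
    params = [v for row in rows for v in row]  # flat list matching the %s order
    return sql, params

sql, params = build_upsert("t", ["id", "Col1", "Col2"], [(1, 1, 1), (2, 2, 3)])
print(sql)
print(params)
```

Note that MySQL 8.0.20 deprecates the VALUES() function in this clause in favor of row aliases (INSERT ... AS new ... ON DUPLICATE KEY UPDATE Col1=new.Col1), though VALUES() still works.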
Since you have dynamic values, you need to use an IF or CASE for the columns to be updated. It gets kinda ugly, but it should work.
\\n\\n
Using your example, you could do it like:
\\n\\n
\\nUPDATE table SET Col1 = CASE id \\n WHEN 1 THEN 1 \\n WHEN 2 THEN 2 \\n WHEN 4 THEN 10 \\n ELSE Col1 \\n END, \\n Col2 = CASE id \\n WHEN 3 THEN 3 \\n WHEN 4 THEN 12 \\n ELSE Col2 \\n END\\n WHERE id IN (1, 2, 3, 4);\\n
\\n\"\n },\n {\n \"answer_id\": 5213557,\n \"author\": \"Brooks\",\n \"author_id\": 126001,\n \"author_profile\": \"https://Stackoverflow.com/users/126001\",\n \"pm_score\": 2,\n \"selected\": false,\n \"text\": \"
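Generating the CASE/WHEN pairs from data is straightforward; here is a sketch (Python; the helper is hypothetical and values are restricted to integers for injection safety):

```python
def build_case_update(table, col, updates):
    """Build a single UPDATE that sets `col` per id via CASE/WHEN.
    Values are coerced to int so the generated SQL stays injection-safe;
    string values would need proper escaping or placeholders instead."""
    whens = " ".join("WHEN %d THEN %d" % (int(i), int(v)) for i, v in updates)
    ids = ", ".join(str(int(i)) for i, _ in updates)
    return ("UPDATE %s SET %s = CASE id %s ELSE %s END WHERE id IN (%s)"
            % (table, col, whens, col, ids))

print(build_case_update("t", "Col1", [(1, 1), (2, 2), (4, 10)]))
# UPDATE t SET Col1 = CASE id WHEN 1 THEN 1 WHEN 2 THEN 2 WHEN 4 THEN 10 ELSE Col1 END WHERE id IN (1, 2, 4)
```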
There is a setting you can alter called 'multi statement' that disables MySQL's 'safety mechanism' implemented to prevent (more than one) injection command. Typical of MySQL's 'brilliant' implementation, it also prevents the user from doing efficient queries.
\\n\\n
Here (http://dev.mysql.com/doc/refman/5.1/en/mysql-set-server-option.html) is some info on the C implementation of the setting.
\\n\\n
If you're using PHP, you can use mysqli to do multi statements (I think php has shipped with mysqli for a while now)
\\n\\n
$con = new mysqli('localhost','user1','password','my_database');\\n$query = \\\"Update MyTable SET col1='some value' WHERE id=1 LIMIT 1;\\\";\\n$query .= \\\"UPDATE MyTable SET col1='other value' WHERE id=2 LIMIT 1;\\\";\\n//etc\\n$con->multi_query($query);\\n$con->close();\\n\\n\\n
Hope that helps.
\\n\"\n },\n {\n \"answer_id\": 5577503,\n \"author\": \"Laymain\",\n \"author_id\": 696291,\n \"author_profile\": \"https://Stackoverflow.com/users/696291\",\n \"pm_score\": 3,\n \"selected\": false,\n \"text\": \"
Use a temporary table
\\n
// Reorder items\\nfunction update_items_tempdb(&$items)\\n{\\n shuffle($items);\\n $table_name = uniqid('tmp_test_');\\n $sql = "CREATE TEMPORARY TABLE `$table_name` ("\\n ." `id` int(10) unsigned NOT NULL AUTO_INCREMENT"\\n .", `position` int(10) unsigned NOT NULL"\\n .", PRIMARY KEY (`id`)"\\n .") ENGINE = MEMORY";\\n query($sql);\\n $i = 0;\\n $sql = '';\\n foreach ($items as &$item)\\n {\\n $item->position = $i++;\\n $sql .= ($sql ? ', ' : '')."({$item->id}, {$item->position})";\\n }\\n if ($sql)\\n {\\n query("INSERT INTO `$table_name` (id, position) VALUES $sql");\\n $sql = "UPDATE `test`, `$table_name` SET `test`.position = `$table_name`.position"\\n ." WHERE `$table_name`.id = `test`.id";\\n query($sql);\\n }\\n query("DROP TABLE `$table_name`");\\n}\\n\\n\"\n },\n {\n \"answer_id\": 14128210,\n \"author\": \"eggmatters\",\n \"author_id\": 1010444,\n \"author_profile\": \"https://Stackoverflow.com/users/1010444\",\n \"pm_score\": 2,\n \"selected\": false,\n \"text\": \"
You can alias the same table to give you the id's you want to update by (if you are doing a row-by-row update):
\\n\\n
UPDATE table1 tab1, table1 tab2 -- alias references the same table\\nSET \\ncol1 = 1\\n,col2 = 2\\n. . . \\nWHERE \\ntab1.id = tab2.id;\\n\\n\\n
Additionally, it should seem obvious that you can also update from other tables as well. In this case, the update doubles as a "SELECT" statement, giving you the data from the table you are specifying. You are explicitly stating the update values in your query, so the second table is unaffected.
\\n\"\n },\n {\n \"answer_id\": 17284265,\n \"author\": \"Roman Imankulov\",\n \"author_id\": 848010,\n \"author_profile\": \"https://Stackoverflow.com/users/848010\",\n \"pm_score\": 7,\n \"selected\": false,\n \"text\": \"
We recently discovered that the Google Maps API does not play nicely with SSL. Fair enough, but what are some options for overcoming this that others have used effectively?
\n\n
\nWill the Maps API work over SSL (HTTPS)?
\n \nAt this time, the Maps API is not available over a secure (SSL) connection. If you are running the Maps API on a secure site, the browser may warn the user about non-secure objects on the screen.
\n
\n\n
We have considered the following options
\n\n
1. Splitting the page so that credit card collection (the requirement for SSL) is not on the same page as the Google Map.
2. Switching to another map provider, such as Virtual Earth. Rumor has it that they support SSL.
3. Playing tricks with IFRAMEs. Sounds kludgy.
4. Proxying the calls to Google. Sounds like a lot of overhead.\n\n
Are there other options, or does anyone have insight into the options that we have considered?
\n"},"answers":{"kind":"list like","value":[{"answer_id":3476,"author":"GateKiller","author_id":383,"author_profile":"https://Stackoverflow.com/users/383","pm_score":2,"selected":false,"text":"
I would go with your first solution. This allows the user to focus on entering their credit card details.
\n\n
You can then transfer them to another webpage which asks or provides them further information relating to the Google Map.
\n"},{"answer_id":3898,"author":"palmsey","author_id":521,"author_profile":"https://Stackoverflow.com/users/521","pm_score":3,"selected":false,"text":"
This seems like a buisness requirements/usability issue - do you have a good reason for putting the map on the credit card page? If so, maybe it's worth working through some technical problems.
\n\n
You might try using Mapstraction, so you can switch to a provider that supports SSL, and switch back to Google if they support it in the future.
\n"},{"answer_id":20612,"author":"Gary","author_id":2330,"author_profile":"https://Stackoverflow.com/users/2330","pm_score":5,"selected":true,"text":"
I'd agree with the previous two answers that in this instance it may be better from a usability perspective to split the two functions into separate screens. You really want your users to be focussed on entering complete and accurate credit card information, and having a map on the same screen may be distracting.
\n\n
For the record though, Virtual Earth certainly does fully support SSL. To enable it you simply need to change the script reference from http:// to https:// and append &s=1 to the URL, e.g.
\n\n
<script src=\"http://dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.1\" type=\"text/javascript\"></script>\n\n\n
becomes
\n\n
<script src=\"https://dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.1&s=1\" type=\"text/javascript\"></script>\n\n"},{"answer_id":1500201,"author":"cope360","author_id":48044,"author_profile":"https://Stackoverflow.com/users/48044","pm_score":2,"selected":false,"text":"
If you are a Google Maps API Premier customer, then SSL is supported. We use this and it works well.
\n\n
Prior to Google making SSL available, we proxied all the traffic and this worked acceptably. You lose the advantage of Google's CDN when you use this approach and you may get your IP banned since it will appear that you are generating a lot of traffic.
\n"},{"answer_id":5337403,"author":"Pasted","author_id":1308097,"author_profile":"https://Stackoverflow.com/users/1308097","pm_score":3,"selected":false,"text":"
Just to add to this
\n\n
http://googlegeodevelopers.blogspot.com/2011/03/maps-apis-over-ssl-now-available-to-all.html
\n\n
Haven't tried migrating my SSL maps (ended up using Bing maps api) back to Google yet but might well be on the cards.
\n"},{"answer_id":11800462,"author":"Bhupendra","author_id":1574777,"author_profile":"https://Stackoverflow.com/users/1574777","pm_score":1,"selected":false,"text":"
If you are getting SECURITY ALERT on IE 9 while displaying Google maps, use
\n\n
<script src=\"https://maps.google.com/maps?file=api&v=2&hl=en&tab=wl&z=6&sensor=true&key=<?php echo $key;?>\n\" type=\"text/javascript\"></script>\n\n\n
instead of
\n\n
<script src=\"https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&sensor=SET_TO_TRUE_OR_FALSE\"\n type=\"text/javascript\"></script>\n\n"},{"answer_id":44341756,"author":"Panayiotis Hiripis","author_id":3342967,"author_profile":"https://Stackoverflow.com/users/3342967","pm_score":0,"selected":false,"text":"
I 've just removed the http protocol and it worked!
\n\n
From this:
\n\n
<script src=\"http://maps.google.com/maps/api/js?sensor=true\" type=\"text/javascript\"></script>\n\n\n
To this:
\n\n
<script src=\"//maps.google.com/maps/api/js?sensor=true\" type=\"text/javascript\"></script>\n\n"}],"string":"[\n {\n \"answer_id\": 3476,\n \"author\": \"GateKiller\",\n \"author_id\": 383,\n \"author_profile\": \"https://Stackoverflow.com/users/383\",\n \"pm_score\": 2,\n \"selected\": false,\n \"text\": \"
I would go with your first solution. This allows the user to focus on entering their credit card details.
\\n\\n
You can then transfer them to another webpage which asks or provides them further information relating to the Google Map.
\\n\"\n },\n {\n \"answer_id\": 3898,\n \"author\": \"palmsey\",\n \"author_id\": 521,\n \"author_profile\": \"https://Stackoverflow.com/users/521\",\n \"pm_score\": 3,\n \"selected\": false,\n \"text\": \"
This seems like a buisness requirements/usability issue - do you have a good reason for putting the map on the credit card page? If so, maybe it's worth working through some technical problems.
\\n\\n
You might try using Mapstraction, so you can switch to a provider that supports SSL, and switch back to Google if they support it in the future.
\\n\"\n },\n {\n \"answer_id\": 20612,\n \"author\": \"Gary\",\n \"author_id\": 2330,\n \"author_profile\": \"https://Stackoverflow.com/users/2330\",\n \"pm_score\": 5,\n \"selected\": true,\n \"text\": \"
I'd agree with the previous two answers that in this instance it may be better from a usability perspective to split the two functions into separate screens. You really want your users to be focussed on entering complete and accurate credit card information, and having a map on the same screen may be distracting.
\\n\\n
For the record though, Virtual Earth certainly does fully support SSL. To enable it you simple need to change the script reference from http:// to https:// and append &s=1 to the URL, e.g.
\\n\\n
<script src=\\\"http://dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.1\\\" type=\\\"text/javascript\\\"></script>\\n\\n\\n
becomes
\\n\\n
<script src=\\\"https://dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.1&s=1\\\" type=\\\"text/javascript\\\"></script>\\n\\n\"\n },\n {\n \"answer_id\": 1500201,\n \"author\": \"cope360\",\n \"author_id\": 48044,\n \"author_profile\": \"https://Stackoverflow.com/users/48044\",\n \"pm_score\": 2,\n \"selected\": false,\n \"text\": \"
If you are a Google Maps API Premier customer, then SSL is supported. We use this and it works well.
\\n\\n
Prior to Google making SSL available, we proxyed all the traffic and this worked acceptably. You lose the advantage of Google's CDN when you use this approach and you may get your IP banned since it will appear that you are generating a lot of traffic.
\\n\"\n },\n {\n \"answer_id\": 5337403,\n \"author\": \"Pasted\",\n \"author_id\": 1308097,\n \"author_profile\": \"https://Stackoverflow.com/users/1308097\",\n \"pm_score\": 3,\n \"selected\": false,\n \"text\": \"
Just to add to this
\\n\\n
http://googlegeodevelopers.blogspot.com/2011/03/maps-apis-over-ssl-now-available-to-all.html
\\n\\n
Haven't tried migrating my SSL maps (ended up using Bing maps api) back to Google yet but might well be on the cards.
\\n\"\n },\n {\n \"answer_id\": 11800462,\n \"author\": \"Bhupendra\",\n \"author_id\": 1574777,\n \"author_profile\": \"https://Stackoverflow.com/users/1574777\",\n \"pm_score\": 1,\n \"selected\": false,\n \"text\": \"
If you are getting SECURITY ALERT on IE 9 while displaying Google maps, use
\\n\\n
<script src=\\\"https://maps.google.com/maps?file=api&v=2&hl=en&tab=wl&z=6&sensor=true&key=<?php echo $key;?>\\n\\\" type=\\\"text/javascript\\\"></script>\\n\\n\\n
instead of
\\n\\n
<script src=\\\"https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&sensor=SET_TO_TRUE_OR_FALSE\\\"\\n type=\\\"text/javascript\\\"></script>\\n\\n\"\n },\n {\n \"answer_id\": 44341756,\n \"author\": \"Panayiotis Hiripis\",\n \"author_id\": 3342967,\n \"author_profile\": \"https://Stackoverflow.com/users/3342967\",\n \"pm_score\": 0,\n \"selected\": false,\n \"text\": \"
I've just removed the http protocol and it worked!
\\n\\n
From this:
\\n\\n
<script src=\\\"http://maps.google.com/maps/api/js?sensor=true\\\" type=\\\"text/javascript\\\"></script>\\n\\n\\n
To this:
\\n\\n
<script src=\\\"//maps.google.com/maps/api/js?sensor=true\\\" type=\\\"text/javascript\\\"></script>\\n\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/06"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/3437","https://Stackoverflow.com","https://Stackoverflow.com/users/308/"],"string":"[\n \"https://Stackoverflow.com/questions/3437\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/308/\"\n]"},"input":{"kind":"string","value":"We recently discovered that the Google Maps API does not play nicely with SSL. Fair enough, but what are some options for overcoming this that others have used effectively?\n\n> \n> [Will the Maps API work over SSL (HTTPS)?](http://code.google.com/support/bin/answer.py?answer=65301&topic=10945)\n> \n> \n> At this time, the Maps API is not\n> available over a secure (SSL)\n> connection. If you are running the\n> Maps API on a secure site, the browser\n> may warn the user about non-secure\n> objects on the screen.\n> \n> \n> \n\nWe have considered the following options\n\n1. Splitting the page so that credit card collection (the requirement for SSL) is not on the same page as the Google Map.\n2. Switching to another map provider, such as Virtual Earth. Rumor has it that they support SSL.\n3. Playing tricks with IFRAMEs. Sounds kludgy.\n4. Proxying the calls to Google. Sounds like a lot of overhead.\n\nAre there other options, or does anyone have insight into the options that we have considered?"},"output":{"kind":"string","value":"I'd agree with the previous two answers that in this instance it may be better from a usability perspective to split the two functions into separate screens. You really want your users to be focussed on entering complete and accurate credit card information, and having a map on the same screen may be distracting.\n\nFor the record though, Virtual Earth certainly does fully support SSL. 
To enable it you simple need to change the script reference from http:// to https:// and append &s=1 to the URL, e.g.\n\n```\n\n\n```\n\nbecomes\n\n```\n\n\n```"}}},{"rowIdx":385,"cells":{"qid":{"kind":"number","value":3470,"string":"3,470"},"question":{"kind":"string","value":"
I have a very simple problem which requires a very quick and simple solution in SQL Server 2005.
\n\n
I have a table with x Columns. I want to be able to select one row from the table and then transform the columns into rows.
\n\n
TableA\nColumn1, Column2, Column3\n\n\n
SQL Statement to return
\n\n
ResultA\nValue of Column1\nValue of Column2\nValue of Column3\n\n\n
\n\n
@Kevin: I've had a Google search on the topic but a lot of the examples were overly complex for my example; are you able to help further?
\n\n
@Mario: The solution I am creating has 10 columns which stores the values 0 to 6 and I must work out how many columns have the value 3 or more. So I thought about creating a query to turn that into rows and then using the generated table in a subquery to say count the number of rows with Column >= 3
\n"},"answers":{"kind":"list like","value":[{"answer_id":3473,"author":"Kev","author_id":419,"author_profile":"https://Stackoverflow.com/users/419","pm_score":4,"selected":true,"text":"
You should take a look at the UNPIVOT clause.
\n\n
Update1: GateKiller, strangely enough I read an article (about something unrelated) about it this morning and I'm trying to jog my memory where I saw it again, had some decent looking examples too. It'll come back to me I'm sure.
\n\n
Update2: Found it: http://weblogs.sqlteam.com/jeffs/archive/2008/04/23/unpivot.aspx
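A minimal sketch of the idea, using the column names from the question (the linked article's own example may differ):

```sql
-- Turn one row's columns into rows with UNPIVOT (SQL Server 2005+)
SELECT ColName, ColValue
FROM (SELECT Column1, Column2, Column3 FROM TableA) AS src
UNPIVOT (ColValue FOR ColName IN (Column1, Column2, Column3)) AS u;

-- The poster's follow-up (count columns with value >= 3) then becomes:
SELECT COUNT(*) AS ColsAtLeast3
FROM (SELECT Column1, Column2, Column3 FROM TableA) AS src
UNPIVOT (ColValue FOR ColName IN (Column1, Column2, Column3)) AS u
WHERE ColValue >= 3;
```

Note that UNPIVOT requires the unpivoted columns to share a data type, which holds here since they all store the values 0 to 6.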
\n"},{"answer_id":3475,"author":"Michael Stum","author_id":91,"author_profile":"https://Stackoverflow.com/users/91","pm_score":0,"selected":false,"text":"
UNION should be your friend:
\n\n
SELECT Column1 FROM table WHERE idColumn = 1\nUNION ALL\nSELECT Column2 FROM table WHERE idColumn = 1\nUNION ALL\nSELECT Column3 FROM table WHERE idColumn = 1\n\n\n
but it can also be your foe on large result sets.
\n"},{"answer_id":3478,"author":"Joseph Daigle","author_id":507,"author_profile":"https://Stackoverflow.com/users/507","pm_score":0,"selected":false,"text":"
If you have a fixed set of columns and you know what they are, you can basically do a series of subselects
\n\n
(SELECT Column1 AS ResultA FROM TableA) as R1
\n\n
and join the subselects. All this in a single query.
\n"},{"answer_id":3513,"author":"Mat","author_id":48,"author_profile":"https://Stackoverflow.com/users/48","pm_score":0,"selected":false,"text":"
I'm not sure of the SQL Server syntax for this but in MySQL I would do
\n\n
SELECT IDColumn, ( IF( Column1 >= 3, 1, 0 ) + IF( Column2 >= 3, 1, 0 ) + IF( Column3 >= 3, 1, 0 ) + ... [snip ] )\n AS NumberOfColumnsGreaterThanThree\nFROM TableA;\n\n\n
EDIT: A very (very) brief Google search tells me that the CASE statement does what I am doing with the IF statement in MySQL. You may or may not get use out of the Google result I found
\n\n
FURTHER EDIT: I should also point out that this isn't an answer to your question but an alternative solution to your actual problem.
\n"},{"answer_id":3533,"author":"Shawn","author_id":26,"author_profile":"https://Stackoverflow.com/users/26","pm_score":1,"selected":false,"text":"
I had to do this for a project before. One of the major difficulties I had was explaining what I was trying to do to other people. I spent a ton of time trying to do this in SQL, but I found the pivot function woefully inadequate. I do not remember the exact reason why, but it is too simplistic for most applications, and it isn't fully implemented in MS SQL 2000. I wound up writing a pivot function in .NET. I'll post it here in hopes it helps someone, someday.
\n\n
''' <summary>\n ''' Pivots a data table from rows to columns\n ''' </summary>\n ''' <param name=\"dtOriginal\">The data table to be transformed</param>\n ''' <param name=\"strKeyColumn\">The name of the column that identifies each row</param>\n ''' <param name=\"strNameColumn\">The name of the column with the values to be transformed from rows to columns</param>\n ''' <param name=\"strValueColumn\">The name of the column with the values to pivot into the new columns</param>\n ''' <returns>The transformed data table</returns>\n ''' <remarks></remarks>\n Public Shared Function PivotTable(ByVal dtOriginal As DataTable, ByVal strKeyColumn As String, ByVal strNameColumn As String, ByVal strValueColumn As String) As DataTable\n Dim dtReturn As DataTable\n Dim drReturn As DataRow\n Dim strLastKey As String = String.Empty\n Dim blnFirstRow As Boolean = True\n\n ' copy the original data table and remove the name and value columns\n dtReturn = dtOriginal.Clone\n dtReturn.Columns.Remove(strNameColumn)\n dtReturn.Columns.Remove(strValueColumn)\n\n ' create a new row for the new data table\n drReturn = dtReturn.NewRow\n\n ' Fill the new data table with data from the original table\n For Each drOriginal As DataRow In dtOriginal.Rows\n\n ' Determine if a new row needs to be started\n If drOriginal(strKeyColumn).ToString <> strLastKey Then\n\n ' If this is not the first row, the previous row needs to be added to the new data table\n If Not blnFirstRow Then\n dtReturn.Rows.Add(drReturn)\n End If\n\n blnFirstRow = False\n drReturn = dtReturn.NewRow\n\n ' Add all non-pivot column values to the new row\n For Each dcOriginal As DataColumn In dtOriginal.Columns\n If dcOriginal.ColumnName <> strNameColumn AndAlso dcOriginal.ColumnName <> strValueColumn Then\n drReturn(dcOriginal.ColumnName.ToLower) = drOriginal(dcOriginal.ColumnName.ToLower)\n End If\n Next\n strLastKey = drOriginal(strKeyColumn).ToString\n End If\n\n ' Add new columns if needed and then assign the pivot values to the 
proper column\n If Not dtReturn.Columns.Contains(drOriginal(strNameColumn).ToString) Then\n dtReturn.Columns.Add(drOriginal(strNameColumn).ToString, drOriginal(strValueColumn).GetType)\n End If\n drReturn(drOriginal(strNameColumn).ToString) = drOriginal(strValueColumn)\n Next\n\n ' Add the final row to the new data table\n dtReturn.Rows.Add(drReturn)\n\n ' Return the transformed data table\n Return dtReturn\n End Function\n\n"},{"answer_id":142124,"author":"Community","author_id":-1,"author_profile":"https://Stackoverflow.com/users/-1","pm_score":0,"selected":false,"text":"
SELECT IDColumn, \n NumberOfColumnsGreaterThanThree = (CASE WHEN Column1 >= 3 THEN 1 ELSE 0 END) + \n (CASE WHEN Column2 >= 3 THEN 1 ELSE 0 END) + \n (Case WHEN Column3 >= 3 THEN 1 ELSE 0 END) \nFROM TableA;\n\n"}],"string":"[\n {\n \"answer_id\": 3473,\n \"author\": \"Kev\",\n \"author_id\": 419,\n \"author_profile\": \"https://Stackoverflow.com/users/419\",\n \"pm_score\": 4,\n \"selected\": true,\n \"text\": \"
Precisely, I don't want users automating access to forms by issuing direct GET/POST HTTP requests to the appropriate servlet. So, I decided to block users based on the value of HTTP_REFERER. I implemented a rewrite rule in the .htaccess file that says: I expected to forbid access to users that didn't navigate the site but issued direct GET requests to the "servlet1" or "servlet2" servlets using query strings. But my expectations ended abruptly because the regular expression I was really disappointed when I changed that expression to So, my question is: how can I accomplish this goal of not allowing "robots" direct access to certain pages if I have no access/privileges/time to modify the application?

I don't have a solution, but I'm betting that relying on the referrer will never work, because user agents are free to not send it at all, or to spoof it to something that will let them in.

I'm guessing you're trying to prevent screen scraping? In my honest opinion it's a tough one to solve, and trying to fix it by checking the value of HTTP_REFERER is just a sticking plaster. Anyone going to the bother of automating submissions is going to be savvy enough to send the correct referer from their 'automaton'. You could try rate limiting, but without actually modifying the app to force some kind of is-this-a-human validation (a CAPTCHA) at some point, you're going to find this hard to prevent.

You can't tell apart users and malicious scripts by their HTTP request. But you can analyze which users are requesting too many pages in too short a time, and block their IP addresses. JavaScript is another helpful tool to prevent (or at least delay) screen scraping. Most automated scraping tools don't have a JavaScript interpreter, so you can do things like setting hidden fields, etc. Edit: Something along the lines of this Phil Haack article.

Using a referrer is very unreliable as a method of verification. As other people have mentioned, it is easily spoofed.
Your best solution is to modify the application (if you can). You could use a CAPTCHA, or set some sort of cookie or session cookie that keeps track of what page the user last visited (a session would be harder to spoof) and keep track of page-view history, and only allow users who have browsed the pages required to get to the page you want to block. This obviously requires you to have access to the application in question; however, it is the most foolproof way (not completely, but "good enough", in my opinion).

If you're trying to prevent search engine bots from accessing certain pages, make sure you're using a properly formatted robots.txt file. Using HTTP_REFERER is unreliable because it is easily faked. Another option is to check the user agent string for known bots (this may require code modification).

To make things a little more clear: yes, I know that using HTTP_REFERER is completely unreliable and somewhat childish, but I'm pretty sure that the people that learned (from me, maybe?) to make automations with Excel VBA will not know how to subvert an HTTP_REFERER within the time span needed to have the final solution.

I don't have access/privileges to modify the application code. Politics. Do you believe that? So, I must wait until the rights holder makes the changes I requested. From previous experiences, I know that the requested changes will take two months to get into Production. No, tossing Agile Methodologies books at their heads didn't improve anything.

This is an intranet app, so I don't have a lot of youngsters trying to undermine my prestige. But I'm young enough to try to undermine the prestige of "a very fancy global consultancy services that comes from India" where, curiously, there is not a single Indian working.

So far, the best answer comes from "Michel de Mare": block users based on their IPs. Well, that I did yesterday.
Today I wanted to make something more generic, because I have a lot of kangaroo users (jumping from one IP address to another) because they use VPN or DHCP.

I'm not sure if I can solve this in one go, but we can go back and forth as necessary. First, I want to repeat what I think you are saying and make sure I'm clear: you want to disallow requests to servlet1 and servlet2 if the request doesn't have the proper referer and it does have a query string? I'm not sure I understand (servlet1|servlet2)/.+\?.+ because it looks like you are requiring a file under servlet1 and 2. I think maybe you are combining PATH_INFO (before the "?") with a GET query string (after the "?"). It appears that the PATH_INFO part will work, but the GET query test will not. I made a quick test on my server using script1.cgi and script2.cgi, and the following rules worked to accomplish what you are asking for. They are obviously edited a little to match my environment:
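The rules themselves were stripped from this dump; purely as an illustrative sketch (the intranet.example.com hostname and the servlet paths are hypothetical, not the poster's real config), Apache mod_rewrite referer checks of this shape look like:

```apache
RewriteEngine On

# Fire only when the Referer does NOT start with the site's own URL...
RewriteCond %{HTTP_REFERER} !^http://intranet\.example\.com/ [NC]
# ...and the request carries a non-empty query string
RewriteCond %{QUERY_STRING} .
# Then answer servlet1/servlet2 requests with 403 Forbidden
RewriteRule ^(servlet1|servlet2)/ - [F]
```

With rules of this shape, legitimate in-site navigation sends the internal Referer and passes, while a direct GET carrying a query string gets a 403. As the answers stress, this only deters casual automation, since any client can forge the header.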
I'm guessing you're trying to prevent screen scraping? In my honest opinion it's a tough one to solve and trying to fix by checking the value of HTTP_REFERER is just a sticking plaster. Anyone going to the bother of automating submissions is going to be savvy enough to send the correct referer from their 'automaton'. You could try rate limiting but without actually modifying the app to force some kind of is-this-a-human validation (a CAPTCHA) at some point then you're going to find this hard to prevent. You can't tell apart users and malicious scripts by their http request. But you can analyze which users are requesting too many pages in too short a time, and block their ip-addresses. Javascript is another helpful tool to prevent (or at least delay) screen scraping. Most automated scraping tools don't have a Javascript interpreter, so you can do things like setting hidden fields, etc. Edit: Something along the lines of this Phil Haack article. Using a referrer is very unreliable as a method of verification. As other people have mentioned, it is easily spoofed. Your best solution is to modify the application (if you can) You could use a CAPTCHA, or set some sort of cookie or session cookie that keeps track of what page the user last visited (a session would be harder to spoof) and keep track of page view history, and only allow users who have browsed the pages required to get to the page you want to block. This obviously requires you to have access to the application in question, however it is the most foolproof way (not completely, but \\\"good enough\\\" in my opinion.) If you're trying to prevent search engine bots from accessing certain pages, make sure you're using a properly formatted robots.txt file. Using HTTP_REFERER is unreliable because it is easily faked. Another option is to check the user agent string for known bots (this may require code modification). 
To make the things a little more clear: Yes, I know that using HTTP_REFERER is completely unreliable and somewhat childish but I'm pretty sure that the people that learned (from me maybe?) to make automations with Excel VBA will not know how to subvert a HTTP_REFERER within the time span to have the final solution. I don't have access/privilege to modify the application code. Politics. Do you believe that? So, I must to wait until the rights holder make the changes I requested. From previous experiences, I know that the requested changes will take two month to get in Production. No, tossing them Agile Methodologies Books in their heads didn't improve anything. This is an intranet app. So I don't have a lot of youngsters trying to undermine my prestige. But I'm young enough as to try to undermine the prestige of \\\"a very fancy global consultancy services that comes from India\\\" but where, curiously, there are not a single indian working there. So far, the best answer comes from \\\"Michel de Mare\\\": block users based on their IPs. Well, that I did yesterday. Today I wanted to make something more generic because I have a lot of kangaroo users (jumping from an Ip address to another) because they use VPN or DHCP. I'm not sure if I can solve this in one go, but we can go back and forth as necessary. First, I want to repeat what I think you are saying and make sure I'm clear. You want to disallow requests to servlet1 and servlet2 is the request doesn't have the proper referer and it does have a query string? I'm not sure I understand (servlet1|servlet2)/.+\\\\?.+ because it looks like you are requiring a file under servlet1 and 2. I think maybe you are combining PATH_INFO (before the \\\"?\\\") with a GET query string (after the \\\"?\\\"). It appears that the PATH_INFO part will work but the GET query test will not. I made a quick test on my server using script1.cgi and script2.cgi and the following rules worked to accomplish what you are asking for. 
They are obviously edited a little to match my environment: The above caught all wrong-referer requests to script1.cgi and script2.cgi that tried to submit data using a query string. However, you can also submit data using a path_info and by posting data. I used this form to protect against any of the three methods being used with incorrect referer: Based on the example you were trying to get working, I think this is what you want: Hopefully this at least gets you closer to your goal. Please let us know how it works, I'm interested in your problem. (BTW, I agree that referer blocking is poor security, but I also understand that relaity forces imperfect and partial solutions sometimes, which you seem to already acknowledge.) You might be able to use an anti-CSRF token to achieve what you're after. This article explains it in more detail: Cross-Site Request Forgeries What is BODMAS and why is it useful in programming? http://www.easymaths.com/What_on_earth_is_Bodmas.htm: What do you think the answer to 2 + 3 x 5 is? Is it (2 + 3) x 5 = 5 x 5 = 25 ? or 2 + (3 x 5) = 2 + 15 = 17 ? BODMAS can come to the rescue and give us rules to follow so that we always get the right answer: (B)rackets (O)rder (D)ivision (M)ultiplication (A)ddition (S)ubtraction According to BODMAS, multiplication should always be done before addition, therefore 17 is actually the correct answer according to BODMAS and will also be the answer which your calculator will give if you type in 2 + 3 x 5 . Why it is useful in programming? No idea, but i assume it's because you can get rid of some brackets? I am a quite defensive programmer, so my lines can look like this: with BODMAS you can make this a bit clearer: I think i'd still use the first variant - more brackets, but that way i do not have to learn yet another rule and i run into less risk of forgetting it and causing those weird hard to debug errors? Just guessing at that part though. 
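The code samples in this answer were lost in extraction; as a stand-in sketch (the variable names and magic number are invented, not the original post's), the same point about precedence versus brackets can be made in Ruby:

```ruby
# Multiplication binds tighter than addition (the M before A in BODMAS)
puts 2 + 3 * 5   # => 17, not 25

a, b, i = 1, 2, 3
magic_number = 10

# The "defensive", fully bracketed style
defensive = (((i + 4) - (a + b)) * magic_number)

# The same expression leaning on precedence for the outer multiplication
relaxed = ((i + 4) - (a + b)) * magic_number

raise "styles disagree" unless defensive == relaxed
puts defensive   # => 40
```

Both spellings evaluate identically; the brackets only make the intended grouping explicit for the reader.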
Mike Stone EDIT: Fixed math as Gaius points out.

Another version of this (in middle school) was "Please Excuse My Dear Aunt Sally". The mnemonic device was helpful in school, and still useful in programming today.

Order of operations in an expression, such as: source: http://www.mathsisfun.com/operation-order-bodmas.html

When I learned this in grade school (in Canada) it was referred to as BEDMAS: Brackets Just for those from this part of the world...

I'm not really sure how applicable to programming the old BODMAS mnemonic is anyways. There is no guarantee on order of operations between languages, and while many keep the standard operations in that order, not all do. And then there are some languages where order of operations isn't really all that meaningful (Lisp dialects, for example). In a way, you're probably better off for programming if you forget the standard order and either use parentheses for everything (e.g. (a*b) + c) or specifically learn the order for each language you work in.

I don't have the power to edit @Michael Stum's answer, but it's not quite correct. He reduces to They are not equivalent. The best reduction I can get for the whole expression is or

I read somewhere that especially in C/C++ splitting your expressions into small statements was better for optimisation; so instead of writing hugely complex expressions in one line, you cache the parts into variables and do each one in steps, then build them up as you go along. The optimisation routines will use registers in places where you had variables, so it shouldn't impact space, but it can help the compiler a little.
The best reduction I can get for the whole expression is or I read somewhere that especially in C/C++ splitting your expressions into small statements was better for optimisation; so instead of writing hugely complex expressions in one line, you cache the parts into variables and do each one in steps, then build them up as you go along. The optimisation routines will use registers in places where you had variables so it shouldn't impact space but it can help the compiler a little. I have a I'd like to write a Ruby script that calls that Rake task multiple times, but the only solution I see is shelling out using `` (backticks) or What's the right way to do this? from timocracy.com: You can use Your example call Example: Please replace This solution will write the result to stdout - but you did not mention, that you want to suppress output. Interesting experiment: You can call the Example: The result (tested with rake 10.4.2): This works with Rake version 10.0.3: As knut said, use In a script with Rails loaded (e.g. from timocracy.com: You can use Your example call Example: Please replace This solution will write the result to stdout - but you did not mention, that you want to suppress output. Interesting experiment: You can call the Example: The result (tested with rake 10.4.2): This works with Rake version 10.0.3: As knut said, use In a script with Rails loaded (e.g. In SQL Server how do you query a database to bring back all the tables that have a field of a specific name? I'm old-school: The following query will bring back a unique list of tables where I'm old-school: The following query will bring back a unique list of tables where I want to create my Rails application with MySQL, because I like it so much. How can I do that in the latest version of Rails instead of the default SQLite? 
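For the SQL Server question above, the "old-school" queries themselves have been elided from this copy. A minimal sketch of the usual approach queries INFORMATION_SCHEMA.COLUMNS (the column name 'MyColumn' is a placeholder, and this is my sketch rather than the original answer's exact query):

```sql
-- List the distinct tables that contain a column with a given name.
SELECT DISTINCT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'MyColumn'
ORDER BY TABLE_SCHEMA, TABLE_NAME;
```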
Normally, you would create a new Rails app using ... To use MySQL, use ...

If you are creating a new rails application you can set the database using the -d switch like this: ... It's always easy to switch your database later though, and using sqlite really is easier if you are developing on a Mac.

If you already have a rails project, change the adapter in the ... Next, make sure you edit your Gemfile to include the mysql2 or activerecord-jdbcmysql-adapter (if using jruby).

For Rails 3 you can use this command to create a new project using mysql: ...

In Rails 3, you could do ...

If you are using rails 3 or greater version ... if you have earlier version ... So before you create your project you need to find the rails version, which you can find by ... is always your best friend. Usage: ... Also note that options should be given after the application name: rails and mysql ...; rails and postgresql ...

You should use the switch -D instead of -d because it will generate two apps and mysql with no documentation folders. Alternatively you just use the ... OR ... Changes in config/database.yml: ... Go to the terminal and write: ...

Create the application with the -d option. If you have not created your app yet, just go to cmd (for windows) or terminal (for linux/unix) and type the following command to create a rails application with a mysql database: ... It works for anything above rails version 3. If you have already created your app, then you can do one of the 2 following things: ... OR ... development: ... Moreover, remove gem 'sqlite3' from your Gemfile and add the gem 'mysql2'.

Just go to rails console and type: ...

First make sure that the mysql gem is installed; if not, type the following command in your console: ... Then create a new rails app and set mysql as the default database by typing the following command in your console: ...

On a new project, easy peasy: ... On an existing project, definitely trickier. This has given me a number of issues on existing rails projects. This kind of works with me: ...

Use the following command to create a new app for an API with a mysql database: ... database.yml: ... Gemfile: ...

You first should make sure that the MySQL driver is on your system; if not, run this on your terminal if you are using Ubuntu or any Debian distro: ... and add this to your Gemfile: ... then run ... in the root directory of the project. After that you can add the mysql config to config/database.yml as in the previous answers.
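The commands and config files referenced in these answers are elided in this copy. As a sketch only, a development entry in config/database.yml for the mysql2 adapter might look like this (database name, credentials, and host are all placeholders):

```yaml
# config/database.yml - development section only; values are illustrative
development:
  adapter: mysql2
  encoding: utf8
  database: myapp_development
  username: root
  password: secret
  host: 127.0.0.1
```

A new app preconfigured this way would typically be generated with something like `rails new myapp -d mysql`, as several of the answers describe.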
I've got TortoiseSVN installed and have a majority of my repositories checking in and out from C:\subversion\ and a couple checking in and out from a network share (I forgot about this when I originally posted this question). This means that I don't have a "subversion" server per-se. How do I integrate TortoiseSVN and Fogbugz? Edit: inserted italics

This answer is incomplete and flawed! It only works from TortoiseSVN to Fogbugz, but not the other way around. I still need to know how to get it to work backwards from Fogbugz (like it's designed to) so that I can see the Revision number a bug is addressed in from Fogbugz while looking at a bug.
http://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-dug-propertypage.html
http://tortoisesvn.net/issuetracker_integration

1. Go into your fogbugz account and click Extras > Configure Source Control Integration
2. Download "post-commit.bat" and the VBScript file for Subversion
3. Create a "hooks" directory in a common easily accessed location (preferably with no spaces in the file path)
4. Place a copy of the files in the hooks directories
5. Rename the files without the ".safe" extension
6. Right click on any directory. Select "TortoiseSVN > Settings" (in the right click menu from the last step)
7. Select "Hook Scripts"
8. Click "Add"
9. Set the properties thus:
   Hook Type: Post-Commit Hook
   Working Copy Path: C:\\Projects (or whatever your root directory for all of your projects is. If you have multiple you will need to do this step for each one.)
   Command Line To Execute: C:\\subversion\\hooks\\post-commit.bat (this needs to point to wherever you put your hooks directory from step 3)
   I also selected the checkbox to Wait for the script to finish...
10. Click OK...

WARNING: Don't forget the double back-slash! "\\"

Note: the screenshot is different; follow the text for the file paths, NOT the screenshot...

At this point it would seem you could click "Issue Tracker Integration" and select Fogbugz. Nope. It just returns "There are no issue-tracker providers available".
Once again:

1. Right click on the root directory of the checked out project you want to work with (you need to do this "configure the properties" step for each project -- see "Migrating Properties Between Projects" below)
2. Select "TortoiseSVN > Properties" (in the right click menu from the last step)
3. Add five property value pairs by clicking "New..." and inserting the following in "Property Name" and "Property Value" respectively:
   bugtraq:label          BugzID:
   bugtraq:number         true
   bugtraq:url            http://[your fogbugz URL here]/default.asp?%BUGID%
   bugtraq:warnifnoissue  false

Now when you are committing, you can specify one bug that the commit addresses. This kind of forces you to commit after fixing each bug... When you view the log (Right click root of project, TortoiseSVN > show log) you can see the bug id that each checkin corresponds to (1), and you can click the bug id number to be taken to fogbugz to view that bug automatically if you are looking at the actual log message. Pretty nifty!

Migrating Properties Between Projects:

1. Right click on a project that already has the proper Properties configuration
2. Select "TortoiseSVN > Properties" (from the right-click menu from step 1)
3. Highlight all of the desired properties
4. Click "Export"
5. Name the file after the property, and place in an easily accessible directory (I placed mine with the hooks files)
6. Right click on the root directory of the checked out project needing properties set for it
7. Click "Import"
8. Select the file you exported in step 4 above
9. Click Open

Why can't you simply install a subversion server? If you download VisualSVN Server, which is free, you get a http server for your source code and can thus use the FogBugz scripts for integrating the two. The reason I'm asking is because all scripts and documentation so far assume you have the server; client-side scripts are too new for FogBugz to have templates for them, so you're pretty much left to your own devices on that.

I am not sure I follow you.
Do you have the repositories on the network or on your C:\ drive? According to two of your posts, you have both, or neither, or one of them or... You can not get VisualSVN or Apache to safely serve repositories from a network share. Since you originally said you had the repositories on your C:\ drive, that's what you get advice for. If you have a different setup, you need to tell us about that.

If you have the repositories on your local harddisk, I would install VisualSVN, or integrate it into Apache. VisualSVN can run fine alongside Apache, so if you go that route you only have to install it. Your existing repositories can also just be copied into the repository root directory of VisualSVN and you're up and running.

I am unsure why that big post here is labelled as incomplete, as it details the steps necessary to set up a hook script to inform FogBugz about the new revisions linked to the cases, which should be what the incomplete message says it doesn't do. Is that not working?

The problem is that FogBugz will link to a web page, and file:///etc is not a web page. To get integration two ways, you need a web server for your subversion repository. Either set up Apache or something else that can host those things the proper way.

I've been investigating this issue and have managed to get it working. There are a couple of minor problems but they can be worked around. There are 3 distinct parts to this problem, as follows:

1. The TortoiseSVN part - getting TortoiseSVN to insert the Bugid and hyperlink in the svn log
2. The FogBugz part - getting FogBugz to insert the SVN info and corresponding links
3. The WebSVN part - ensuring the links from FogBugz actually work

Instructions for part 1 are in another answer, although it actually does more than required. The stuff about the hooks is actually for part 2, and as is pointed out - it doesn't work "out of the box". Just to confirm, we are looking at using TortoiseSVN WITHOUT an SVN server (ie.
file-based repositories). I'm accessing the repositories using UNC paths, but it also works for local drives or mapped drives. All of this works with TortoiseSVN v1.5.3 and SVN Server v1.5.2. (You need to install SVN Server because part 2 needs ...)

Creating the TortoiseSVN properties is all that is required in order to get the links in the SVN log. Previous instructions work fine; I'll quote them here for convenience:

1. Right click on the root directory of the checked out project you want to work with.
2. Select "TortoiseSVN -> Properties"
3. Add five property value pairs by clicking "New..." and inserting the following in "Property Name" and "Property Value" respectively (make sure you tick "Apply property recursively" for each one)
4. Click "OK"

As Jeff says, you'll need to do that for each working copy, so follow his instructions for migrating the properties. That's it. TortoiseSVN will now add a link to the corresponding FogBugz bugID when you commit. If that's all you want, you can stop here.

For this to work we need to set up the hook scripts. Basically the batch file is called after each commit, and this in turn calls the VBS script which does the submission to FogBugz. The VBS script actually works fine in this situation so we don't need to modify it. The problem is that the batch file is written to work as a server hook, but we need a client hook.

SVN server calls the post-commit hook with these parameters: ...
TortoiseSVN calls the post-commit hook with these parameters: ...

So that's why it doesn't work - the parameters are wrong. We need to amend the batch file so it passes the correct parameters to the VBS script. You'll notice that TSVN doesn't pass the repository path, which is a problem, but it does work in the following circumstances: ... I'm going to see if I can fix this problem and will post back here if I do.

Here's my amended batch file which does work (please excuse the excessive comments...)
You'll need to set the hook and repository directories to match your setup. I'm going to assume the repositories are at ...

1. Go into your FogBugz account and click Extras -> Configure Source Control Integration
2. Download the VBScript file for Subversion (don't bother with the batch file)
3. Create a folder to store the hook scripts. I put it in the same folder as my repositories, eg. ...
4. Rename the VBScript to remove the ".safe" extension
5. Save my version of the batch file in your hooks directory, as ...
6. Right click on any directory. Select "TortoiseSVN > Settings" (in the right click menu from the last step)
7. Select "Hook Scripts"
8. Click "Add" and set the properties as follows: Hook Type: Post-Commit Hook; Working Copy Path: ...; Command Line To Execute: ...; tick "Wait for the script to finish"
9. Click OK twice.

Next time you commit and enter a Bugid, it will be submitted to FogBugz. The links won't work, but at least the revision info is there and you can manually look up the log in TortoiseSVN.

NOTE: You'll notice that the repository root is hard-coded into the batch file. As a result, if you check out from repositories that don't have the same root (eg. one on local drive and one on network) then you'll need to use 2 batch files and 2 corresponding entries under Hook Scripts in the TSVN settings. The way to do this would be to have 2 separate Working Copy trees - one for each repository root. Errr, I haven't done this :-)

From reading the WebSVN docs, it seems that WebSVN doesn't actually integrate with the SVN server; it just behaves like any other SVN client but presents a web interface. In theory then it should work fine with a file-based repository. I haven't tried it though.
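The amended batch file itself is elided from this copy. As a rough sketch only (NOT the author's actual file): the argument positions below follow the TortoiseSVN client-hook convention of PATH DEPTH MESSAGEFILE REVISION ERROR CWD, the repository path is hard-coded as the text describes, and the script and folder names are placeholders:

```bat
@echo off
rem Sketch of a TortoiseSVN client-side post-commit hook (illustrative only).
rem TortoiseSVN passes: PATH DEPTH MESSAGEFILE REVISION ERROR CWD
rem The FogBugz VBScript expects the server-style arguments: REPOS-PATH REVISION
set REPOS=C:\subversion\myrepo
set REV=%4
cscript "C:\subversion\hooks\fogbugz-subversion.vbs" %REPOS% %REV%
```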
I'm trying to write some PHP to upload a file to a folder on my webserver.
Here's what I have: ...

I'm getting these errors:

Warning: move_uploaded_file(./test.txt) [function.move-uploaded-file]: failed to open stream: Permission denied in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 3

Warning: move_uploaded_file() [function.move-uploaded-file]: Unable to move 'C:\WINDOWS\Temp\phpA30E.tmp' to './test.txt' in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 3

Warning: Cannot modify header information - headers already sent by (output started at E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php:3) in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 4

PHP version 4.4.7. Running IIS on a Windows box. This particular file/folder has 777 permissions. Any ideas?

Warning: move_uploaded_file() [function.move-uploaded-file]: Unable to move 'C:\WINDOWS\Temp\phpA30E.tmp' to './people.xml' in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 3 is the important line. It says you can't put the file where you want it, and this normally means a permissions problem: check that the process running the app (normally the webserver's process for php) has the rights to write a file there. EDIT: hang on a bit, I jumped the gun a little - is the path to the file in the first line correct?

As it's Windows, there is no real 777. If you're using chmod, check the Windows-related comments.

Check that the IIS Account can access (read, write, modify) these two folders: ...

Try adding a path. The following code works for me: ...

OMG Don't do that. Another thing to observe is your directory separator; you are using / on a Windows box.

Add the IIS user in the 'dump' folder's security permissions group, and give it read/write access.

Create a folder named "image" with folder permission ...

We found using the below path and giving everyone full access to the folder resolved the issue. ... Make sure to not reveal the location in the address bar. No sense in giving the location away.
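The code for the "try adding a path" suggestion is elided in this copy. As a sketch of that idea only (the form field name 'userfile' and the redirect target are placeholders, not from the original answer):

```php
<?php
// Sketch: move the upload using an absolute path instead of './'.
$target = dirname(__FILE__) . DIRECTORY_SEPARATOR . 'test.txt';
if (move_uploaded_file($_FILES['userfile']['tmp_name'], $target)) {
    header('Location: success.php');
} else {
    // If this branch runs, the web server user likely cannot write to $target.
    echo 'Upload failed - check write permissions on ' . htmlspecialchars(dirname($target));
}
```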
HTTP_REFERER. After all, if the user is navigating inside the site, it will have an appropriate HTTP_REFERER. Well, that was what I thought.
\n\nRewriteEngine on \n\n# Options +FollowSymlinks\nRewriteCond %{HTTP_REFERER} !^http://mywebaddress(.cl)?/.* [NC]\nRewriteRule (servlet1|servlet2)/.+\\?.+ - [F]\n(servlet1|servlet2)/.+\\?.+ didn't worked at all. (servlet1|servlet2)/.+ and it worked so well that my users were blocked no matter if they navigated the site or not. \n
RewriteCond %{HTTP_REFERER} !^http://(www.)?example.(com|org) [NC]
RewriteCond %{QUERY_STRING} ^.+$
RewriteRule ^(script1|script2)\.cgi - [F]
RewriteCond %{HTTP_REFERER} !^http://(www.)?example.(com|org) [NC]
RewriteCond %{QUERY_STRING} ^.+$ [OR]
RewriteCond %{REQUEST_METHOD} ^POST$ [OR]
RewriteCond %{PATH_INFO} ^.+$
RewriteRule ^(script1|script2)\.cgi - [F]
RewriteCond %{HTTP_REFERER} !^http://mywebaddress(.cl)?/.* [NC]
RewriteCond %{QUERY_STRING} ^.+$ [OR]
RewriteCond %{REQUEST_METHOD} ^POST$ [OR]
RewriteCond %{PATH_INFO} ^.+$
RewriteRule (servlet1|servlet2)\b - [F]
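Reading that final ruleset back (the surrounding explanation is elided in this copy, so comments below are mine): the request is forbidden ([F]) only when the referer is not this site AND the request looks like an actual servlet invocation - it carries a query string, or is a POST, or has extra path info - so plain navigation to those paths from inside the site is left alone:

```apache
# Forbid servlet1/servlet2 requests that come from outside the site
# and look like real invocations (query string, POST body, or PATH_INFO).
RewriteCond %{HTTP_REFERER} !^http://mywebaddress(.cl)?/.* [NC]
RewriteCond %{QUERY_STRING} ^.+$ [OR]
RewriteCond %{REQUEST_METHOD} ^POST$ [OR]
RewriteCond %{PATH_INFO} ^.+$
RewriteRule (servlet1|servlet2)\b - [F]
```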
result = (((i + 4) - (a + b)) * MAGIC_NUMBER) - ANOTHER_MAGIC_NUMBER;

result = (i + 4 - (a + b)) * MAGIC_NUMBER - ANOTHER_MAGIC_NUMBER;

foo * (bar + baz^2 / foo)

Exponents
Division
Multiplication
Addition
Subtraction

(i + 4) - (a + b)

(i + 4 - a + b)

((i + 4) - (a + b)) * MAGIC_NUMBER - ANOTHER_MAGIC_NUMBER;
(i + 4 - a - b) * MAGIC_NUMBER - ANOTHER_MAGIC_NUMBER;
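The disagreement over the reductions above is easy to check numerically; a quick Ruby sanity check (the values below are arbitrary stand-ins for i, a, b and the two magic numbers):

```ruby
# Arbitrary stand-in values; MAGIC_NUMBER / ANOTHER_MAGIC_NUMBER are placeholders.
i, a, b = 10, 3, 4
magic_number = 7
another_magic_number = 5

original = (((i + 4) - (a + b)) * magic_number) - another_magic_number
reduced  = (i + 4 - a - b) * magic_number - another_magic_number  # minus sign distributed
wrong    = (i + 4 - a + b) * magic_number - another_magic_number  # + b instead of - b

puts original == reduced  # => true: -(a + b) == -a - b
puts original == wrong    # => false whenever b != 0
```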
I have a Rakefile with a Rake task that I would normally call from the command line:
rake blog:post Title

I'd like to write a Ruby script that calls that Rake task multiple times, but the only solution I see is shelling out using `` (backticks) or system. What's the right way to do this?
require 'rake'

def capture_stdout
  s = StringIO.new
  oldstdout = $stdout
  $stdout = s
  yield
  s.string
ensure
  $stdout = oldstdout
end

Rake.application.rake_require 'metric_fetcher', ['../../lib/tasks']
results = capture_stdout {Rake.application['metric_fetcher'].invoke}

You can use invoke and reenable to execute the task a second time. Your example call rake blog:post Title seems to have a parameter. This parameter can be used as a parameter in invoke:
```
require 'rake'

task 'mytask', :title do |tsk, args|
  p "called #{tsk} (#{args[:title]})"
end

Rake.application['mytask'].invoke('one')
Rake.application['mytask'].reenable
Rake.application['mytask'].invoke('two')
```

You can replace mytask with blog:post, and instead of the task definition you can require your rakefile.
You can use reenable also inside the task definition; this allows a task to reenable itself.
```
require 'rake'

task 'mytask', :title do |tsk, args|
  p "called #{tsk} (#{args[:title]})"
  tsk.reenable # <-- HERE
end

Rake.application['mytask'].invoke('one')
Rake.application['mytask'].invoke('two')
```
The result:

```
"called mytask (one)"
"called mytask (two)"
```

Kelvin's answer:
```
require 'rake'

app = Rake.application
app.init
# do this as many times as needed
app.add_import 'some/other/file.rake'
# this loads the Rakefile and other imports
app.load_rakefile

app['sometask'].invoke
```

You'll need reenable if you want to invoke the same task multiple times. This also works in a script where Rails is loaded (e.g. one run with rails runner script.rb).
titanous's answer (accepted):

```
def rake(*tasks)
  tasks.each do |task|
    Rake.application[task].tap(&:invoke).tap(&:reenable)
  end
end

rake('db:migrate', 'cache:clear', 'cache:warmup')
```
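Putting the pieces together, a self-contained Ruby sketch of the invoke/reenable pattern (task name, argument name, and the recording array are illustrative), runnable without an external Rakefile:

```ruby
require 'rake'

# Record each run so we can see the task really executes twice.
runs = []
Rake::Task.define_task(:greet, [:name]) do |t, args|
  runs << "called #{t.name} (#{args[:name]})"
end

greet = Rake::Task[:greet]
greet.invoke('one')
greet.reenable           # without this, the second invoke would be a no-op
greet.invoke('two')

puts runs.inspect # => ["called greet (one)", "called greet (two)"]
```

The reenable call is the whole trick: Rake marks a task as already-invoked after the first run, and reenable clears that flag.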
\\n\"\n },\n {\n \"answer_id\": 11520262,\n \"author\": \"knut\",\n \"author_id\": 676874,\n \"author_profile\": \"https://Stackoverflow.com/users/676874\",\n \"pm_score\": 4,\n \"selected\": false,\n \"text\": \"require 'rake'\\n\\ndef capture_stdout\\n s = StringIO.new\\n oldstdout = $stdout\\n $stdout = s\\n yield\\n s.string\\nensure\\n $stdout = oldstdout\\nend\\n\\nRake.application.rake_require 'metric_fetcher', ['../../lib/tasks']\\nresults = capture_stdout {Rake.application['metric_fetcher'].invoke}\\ninvoke and reenable to execute the task a second time.rake blog:post Title seems to have a parameter. This parameter can be used as a parameter in invoke:
\\n\\nrequire 'rake'\\ntask 'mytask', :title do |tsk, args|\\n p \\\"called #{tsk} (#{args[:title]})\\\"\\nend\\n\\n\\n\\nRake.application['mytask'].invoke('one')\\nRake.application['mytask'].reenable\\nRake.application['mytask'].invoke('two')\\nmytask with blog:post and instead the task definition you can require your rakefile.
\\n\\nreenable also inside the task definition. This allows a task to reenable himself.
\\n\\nrequire 'rake'\\ntask 'mytask', :title do |tsk, args|\\n p \\\"called #{tsk} (#{args[:title]})\\\"\\n tsk.reenable #<-- HERE\\nend\\n\\nRake.application['mytask'].invoke('one')\\nRake.application['mytask'].invoke('two')\\n
\\n\"\n },\n {\n \"answer_id\": 15259172,\n \"author\": \"Kelvin\",\n \"author_id\": 498594,\n \"author_profile\": \"https://Stackoverflow.com/users/498594\",\n \"pm_score\": 5,\n \"selected\": false,\n \"text\": \"\\\"called mytask (one)\\\"\\n\\\"called mytask (two)\\\"\\n
\\n\\nrequire 'rake'\\napp = Rake.application\\napp.init\\n# do this as many times as needed\\napp.add_import 'some/other/file.rake'\\n# this loads the Rakefile and other imports\\napp.load_rakefile\\n\\napp['sometask'].invoke\\nreenable if you want to invoke multiple times. rails runner script.rb)
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/06"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/3530","https://Stackoverflow.com","https://Stackoverflow.com/users/1388162/"],"string":"[\n \"https://Stackoverflow.com/questions/3530\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/1388162/\"\n]"},"input":{"kind":"string","value":"I have a `Rakefile` with a Rake task that I would normally call from the command line:\n\n```\nrake blog:post Title\n\n```\n\nI'd like to write a Ruby script that calls that Rake task multiple times, but the only solution I see is shelling out using `` (backticks) or `system`.\n\nWhat's the right way to do this?"},"output":{"kind":"string","value":"from [timocracy.com](http://web.archive.org/web/20090412040557/http://timocracy.com:80/articles/2008/02/21/calling-invoking-rails-rake-tasks-from-within-ruby-for-testing-try-2):\n\n```\nrequire 'rake'\n\ndef capture_stdout\n s = StringIO.new\n oldstdout = $stdout\n $stdout = s\n yield\n s.string\nensure\n $stdout = oldstdout\nend\n\nRake.application.rake_require 'metric_fetcher', ['../../lib/tasks']\nresults = capture_stdout {Rake.application['metric_fetcher'].invoke}\n\n```"}}},{"rowIdx":389,"cells":{"qid":{"kind":"number","value":3567,"string":"3,567"},"question":{"kind":"string","value":"def rake(*tasks)\\n tasks.each do |task|\\n Rake.application[task].tap(&:invoke).tap(&:reenable)\\n end\\nend\\n\\nrake('db:migrate', 'cache:clear', 'cache:warmup')\\n
Curt Hagenlocher's answer:

```
SELECT Table_Name
FROM Information_Schema.Columns
WHERE Column_Name = 'YourFieldName'
```
GateKiller's answer (accepted):

```
SELECT DISTINCT object_name(id)
FROM syscolumns
WHERE name = 'FIELDNAME'
```
Stu's answer: the following query will bring back a unique list of tables where Column_Name is equal to the column you are looking for:

```
SELECT Table_Name
FROM INFORMATION_SCHEMA.COLUMNS
WHERE Column_Name = 'Desired_Column_Name'
GROUP BY Table_Name
```
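If you generate this lookup from code, quote the column name carefully. A Ruby sketch (the helper name is invented; real code should prefer the driver's parameter binding over string interpolation):

```ruby
# Builds the INFORMATION_SCHEMA lookup for a given column name,
# escaping single quotes by doubling them (the T-SQL string-literal rule).
def tables_with_column_sql(column_name)
  quoted = "'" + column_name.gsub("'", "''") + "'"
  <<~SQL
    SELECT Table_Name
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE Column_Name = #{quoted}
    GROUP BY Table_Name
  SQL
end

puts tables_with_column_sql("O'Brien")
```

With parameter binding available (ADO.NET, TinyTDS, etc.), pass the column name as a bound parameter instead and skip the manual quoting entirely.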
\\n\"\n },\n {\n \"answer_id\": 3575,\n \"author\": \"Curt Hagenlocher\",\n \"author_id\": 533,\n \"author_profile\": \"https://Stackoverflow.com/users/533\",\n \"pm_score\": 0,\n \"selected\": false,\n \"text\": \"SELECT Table_Name\\nFROM Information_Schema.Columns\\nWHERE Column_Name = 'YourFieldName'\\n
\\n\"\n },\n {\n \"answer_id\": 3578,\n \"author\": \"GateKiller\",\n \"author_id\": 383,\n \"author_profile\": \"https://Stackoverflow.com/users/383\",\n \"pm_score\": 4,\n \"selected\": true,\n \"text\": \"SELECT DISTINCT object_name(id)\\nFROM syscolumns\\nWHERE name = 'FIELDNAME'\\nColumn_Name is equal to the column you are looking for:
\\n\"\n }\n]"},"date":{"kind":"string","value":"2008/08/06"},"metadata":{"kind":"list like","value":["https://Stackoverflow.com/questions/3567","https://Stackoverflow.com","https://Stackoverflow.com/users/230/"],"string":"[\n \"https://Stackoverflow.com/questions/3567\",\n \"https://Stackoverflow.com\",\n \"https://Stackoverflow.com/users/230/\"\n]"},"input":{"kind":"string","value":"In SQL Server how do you query a database to bring back all the tables that have a field of a specific name?"},"output":{"kind":"string","value":"The following query will bring back a unique list of tables where `Column_Name` is equal to the column you are looking for:\n\n```\nSELECT Table_Name\nFROM INFORMATION_SCHEMA.COLUMNS\nWHERE Column_Name = 'Desired_Column_Name'\nGROUP BY Table_Name\n\n```"}}},{"rowIdx":390,"cells":{"qid":{"kind":"number","value":3585,"string":"3,585"},"question":{"kind":"string","value":"SELECT Table_Name\\nFROM INFORMATION_SCHEMA.COLUMNS\\nWHERE Column_Name = 'Desired_Column_Name'\\nGROUP BY Table_Name\\n
```
rails ProjectName
```
James Avery's answer:

```
rails new ProjectName -d mysql
```
```
rails -d mysql myapp
```

Then change the adapter in the config/database.yml file to mysql and make sure you specify a valid username and password, and optionally, a socket:

```
development:
  adapter: mysql2
  database: db_name_dev
  username: koploper
  password:
  host: localhost
  socket: /tmp/mysql.sock
```
Robbie Done's answer:

```
rails -d mysql ProjectName
```
andy318's answer:

```
$ rails new projectname -d mysql
```
Coder's answer:

```
$ rails new projectname --database=mysql
```

or:

```
rails new your_project_name -d mysql
```

or:

```
rails new -d mysql your_project_name
```
George Bellos's answer: first check your Rails version:

```
rails -v
```

then look at the generator's usage:

```
$ rails --help
```

```
$ rails new APP_PATH [options]
```

so:

```
$ rails new project_name -d mysql
```
Marthinus A. Botha's answer (the same pattern works for PostgreSQL):

```
$ rails new project_name -d postgresql
```

```
rails -D mysql project_name        (less than version 3)
rails new project_name -D mysql    (version 3 and up)
```

These use the --database option:

```
rails new <project_name> -d mysql
```

```
rails new projectname
```
Abhinav's answer: then configure config/database.yml:

```
development:
  adapter: mysql2
  database: db_name_name
  username: root
  password:
  host: localhost
  socket: /tmp/mysql.sock
```
Dipali Nagrale's answer:

```
rails new <project_name> -d mysql
```
Drake Mandin's answer:

```
rails new AppName -d mysql
```

```
$ rails new <your_app_name> -d mysql
```

Then set up config/database.yml:

```
development:
  adapter: mysql2
  database: db_name_name
  username: root
  password:
  host: localhost
  socket: /tmp/mysql.sock
```
Shabbir's answer:

```
rails new YOURAPPNAME -d mysql
```

and install the adapter gem:

```
gem install mysql2
```
Riccardo's answer:

```
rails new app-name -d mysql
```

```
rails new your_new_project_name -d mysql
```
Dinesh Vaitage's answer:

```
# On Gemfile:
gem 'mysql2', '>= 0.3.18', '< 0.5' # copied from a new project for rails 5.1 :)
gem 'activerecord-mysql-adapter' # needed for mysql..

# On Dockerfile or on CLI:
sudo apt-get install -y mysql-client libmysqlclient-dev
```
artamonovdev's answer, for an API-only app:

```
rails new <appname> --api -d mysql
```

with connection settings such as:

```
adapter: mysql2
encoding: utf8
pool: 5
username: root
password:
socket: /var/run/mysqld/mysqld.sock
```

The full generated config/database.yml looks like this:

```
# MySQL. Versions 5.1.10 and up are supported.
#
# Install the MySQL driver
#   gem install mysql2
#
# Ensure the MySQL gem is defined in your Gemfile
#   gem 'mysql2'
#
# And be sure to use new-style password hashing:
#   https://dev.mysql.com/doc/refman/5.7/en/password-hashing.html
#
default: &default
  adapter: mysql2
  encoding: utf8
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  host: localhost
  database: database_name
  username: username
  password: secret

development:
  <<: *default

# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
  <<: *default

# As with config/secrets.yml, you never want to store sensitive information,
# like your database password, in your source code. If your source code is
# ever seen by anyone, they now have access to your database.
#
# Instead, provide the password as a unix environment variable when you boot
# the app. Read http://guides.rubyonrails.org/configuring.html#configuring-a-database
# for a full rundown on how to provide these environment variables in a
# production deployment.
#
# On Heroku and other platform providers, you may have a full connection URL
# available as an environment variable. For example:
#
#   DATABASE_URL="mysql2://myuser:mypass@localhost/somedatabase"
#
# You can use this database configuration with:
#
#   production:
#     url: <%= ENV['DATABASE_URL'] %>
#
production:
  <<: *default
```
Muhammad Elbadawy's answer: add mysql2 to your Gemfile:

```
# Use mysql as the database for Active Record
gem 'mysql2', '>= 0.4.4', '< 0.6.0'
```

install the MySQL client libraries:

```
sudo apt-get install mysql-client libmysqlclient-dev
```

```
gem 'mysql2', '~> 0.3.16'
```

then run:

```
bundle install
```
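The database.yml fragments in these answers all share one shape; a small Ruby round-trip (values are placeholders) shows the structure that gets parsed:

```ruby
require 'yaml'

# Placeholder credentials, mirroring the development stanza shown above.
config = {
  'development' => {
    'adapter'  => 'mysql2',
    'database' => 'db_name_dev',
    'username' => 'root',
    'password' => nil,
    'host'     => 'localhost',
    'socket'   => '/tmp/mysql.sock'
  }
}

yaml_text = config.to_yaml
reloaded  = YAML.safe_load(yaml_text)

puts reloaded['development']['adapter'] # => mysql2
```

Rails additionally runs the file through ERB before parsing, which is how the `<%= ENV.fetch(...) %>` lines in the generated template work.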
\\n\"\n },\n {\n \"answer_id\": 3588,\n \"author\": \"James Avery\",\n \"author_id\": 537,\n \"author_profile\": \"https://Stackoverflow.com/users/537\",\n \"pm_score\": 3,\n \"selected\": false,\n \"text\": \"rails new ProjectName -d mysql\\n
\\n\\nrails -d mysql myapp\\nconfig/database.yml file to mysql and make sure you specify a valid username and password, and optionally, a socket:
\\n\\ndevelopment:\\n adapter: mysql2\\n database: db_name_dev\\n username: koploper\\n password:\\n host: localhost\\n socket: /tmp/mysql.sock\\n
\\n\"\n },\n {\n \"answer_id\": 4438499,\n \"author\": \"Robbie Done\",\n \"author_id\": 541839,\n \"author_profile\": \"https://Stackoverflow.com/users/541839\",\n \"pm_score\": 6,\n \"selected\": false,\n \"text\": \"rails -d mysql ProjectName\\n
\\n\"\n },\n {\n \"answer_id\": 6046965,\n \"author\": \"andy318\",\n \"author_id\": 204180,\n \"author_profile\": \"https://Stackoverflow.com/users/204180\",\n \"pm_score\": 3,\n \"selected\": false,\n \"text\": \"$ rails new projectname -d mysql\\n
\\n\"\n },\n {\n \"answer_id\": 6936105,\n \"author\": \"Coder\",\n \"author_id\": 876011,\n \"author_profile\": \"https://Stackoverflow.com/users/876011\",\n \"pm_score\": 4,\n \"selected\": false,\n \"text\": \"$rails new projectname --database=mysql\\n
\\n\\nrails new your_project_name -d mysql\\n
\\n\\nrails new -d mysql your_project_name\\n
\\n\"\n },\n {\n \"answer_id\": 8183962,\n \"author\": \"George Bellos\",\n \"author_id\": 89724,\n \"author_profile\": \"https://Stackoverflow.com/users/89724\",\n \"pm_score\": 3,\n \"selected\": false,\n \"text\": \"rails -v\\n
\\n\\n$ rails --help \\n
\\n\\n$ rails new APP_PATH[options]\\n
\\n\\n$ rails new project_name -d mysql\\n
\\n\"\n },\n {\n \"answer_id\": 9921786,\n \"author\": \"Marthinus A. Botha\",\n \"author_id\": 1300257,\n \"author_profile\": \"https://Stackoverflow.com/users/1300257\",\n \"pm_score\": 3,\n \"selected\": false,\n \"text\": \"$ rails new project_name -d postgresql\\n
\\n\\n rails -D mysql project_name (less than version 3)\\n\\n rails new project_name -D mysql (version 3 and up)\\n--database option.
\\n\\nrails new <project_name> -d mysql\\n
\\n\\nrails new projectname\\n
\\n\"\n },\n {\n \"answer_id\": 14438074,\n \"author\": \"Abhinav\",\n \"author_id\": 1996835,\n \"author_profile\": \"https://Stackoverflow.com/users/1996835\",\n \"pm_score\": 5,\n \"selected\": false,\n \"text\": \"development:\\n adapter: mysql2\\n database: db_name_name\\n username: root\\n password:\\n host: localhost\\n socket: /tmp/mysql.sock\\n
\\n\"\n },\n {\n \"answer_id\": 14440100,\n \"author\": \"Dipali Nagrale\",\n \"author_id\": 1645570,\n \"author_profile\": \"https://Stackoverflow.com/users/1645570\",\n \"pm_score\": 4,\n \"selected\": false,\n \"text\": \"rails new <project_name> -d mysql\\n
\\n\"\n },\n {\n \"answer_id\": 24365127,\n \"author\": \"Drake Mandin\",\n \"author_id\": 3767282,\n \"author_profile\": \"https://Stackoverflow.com/users/3767282\",\n \"pm_score\": 5,\n \"selected\": false,\n \"text\": \"rails new AppName -d mysql\\n$rails new <your_app_name> -d mysql\\n
\\n\\n\\n
\\n\\n
\\n adapter: mysql2
\\n database: db_name_name
\\n username: root
\\n password:
\\n host: localhost
\\n socket: /tmp/mysql.sock
\\n\"\n },\n {\n \"answer_id\": 46322499,\n \"author\": \"Shabbir\",\n \"author_id\": 8572496,\n \"author_profile\": \"https://Stackoverflow.com/users/8572496\",\n \"pm_score\": 2,\n \"selected\": false,\n \"text\": \"rails new YOURAPPNAME -d mysql\\n
\\n\\ngem install mysql2\\n
\\n\"\n },\n {\n \"answer_id\": 48695295,\n \"author\": \"Riccardo\",\n \"author_id\": 362420,\n \"author_profile\": \"https://Stackoverflow.com/users/362420\",\n \"pm_score\": 2,\n \"selected\": false,\n \"text\": \"rails new app-name -d mysql\\n
\\n\\nrails new your_new_project_name -d mysql\\n
\\n\"\n },\n {\n \"answer_id\": 52131248,\n \"author\": \"Dinesh Vaitage\",\n \"author_id\": 5710925,\n \"author_profile\": \"https://Stackoverflow.com/users/5710925\",\n \"pm_score\": 0,\n \"selected\": false,\n \"text\": \"# On Gemfile:\\ngem 'mysql2', '>= 0.3.18', '< 0.5' # copied from a new project for rails 5.1 :)\\ngem 'activerecord-mysql-adapter' # needed for mysql..\\n\\n# On Dockerfile or on CLI:\\nsudo apt-get install -y mysql-client libmysqlclient-dev \\n
\\n\"\n },\n {\n \"answer_id\": 54820357,\n \"author\": \"artamonovdev\",\n \"author_id\": 5754223,\n \"author_profile\": \"https://Stackoverflow.com/users/5754223\",\n \"pm_score\": 0,\n \"selected\": false,\n \"text\": \"rails new <appname> --api -d mysql\\n\\n\\n adapter: mysql2\\n encoding: utf8\\n pool: 5\\n username: root\\n password: \\n socket: /var/run/mysqld/mysqld.sock\\n
\\n\\n# MySQL. Versions 5.1.10 and up are supported.\\n#\\n# Install the MySQL driver\\n# gem install mysql2\\n#\\n# Ensure the MySQL gem is defined in your Gemfile\\n# gem 'mysql2'\\n#\\n# And be sure to use new-style password hashing:\\n# https://dev.mysql.com/doc/refman/5.7/en/password-hashing.html\\n#\\ndefault: &default\\n adapter: mysql2\\n encoding: utf8\\n pool: <%= ENV.fetch(\\\"RAILS_MAX_THREADS\\\") { 5 } %>\\n host: localhost\\n database: database_name\\n username: username\\n password: secret\\n\\ndevelopment:\\n <<: *default\\n\\n# Warning: The database defined as \\\"test\\\" will be erased and\\n# re-generated from your development database when you run \\\"rake\\\".\\n# Do not set this db to the same as development or production.\\ntest:\\n <<: *default\\n\\n# As with config/secrets.yml, you never want to store sensitive information,\\n# like your database password, in your source code. If your source code is\\n# ever seen by anyone, they now have access to your database.\\n#\\n# Instead, provide the password as a unix environment variable when you boot\\n# the app. Read http://guides.rubyonrails.org/configuring.html#configuring-a-database\\n# for a full rundown on how to provide these environment variables in a\\n# production deployment.\\n#\\n# On Heroku and other platform providers, you may have a full connection URL\\n# available as an environment variable. For example:\\n#\\n# DATABASE_URL=\\\"mysql2://myuser:mypass@localhost/somedatabase\\\"\\n#\\n# You can use this database configuration with:\\n#\\n# production:\\n# url: <%= ENV['DATABASE_URL'] %>\\n#\\nproduction:\\n <<: *default\\n
\\n\"\n },\n {\n \"answer_id\": 61106989,\n \"author\": \"Muhammad Elbadawy\",\n \"author_id\": 8111491,\n \"author_profile\": \"https://Stackoverflow.com/users/8111491\",\n \"pm_score\": 0,\n \"selected\": false,\n \"text\": \"# Use mysql as the database for Active Record\\ngem 'mysql2', '>= 0.4.4', '< 0.6.0'\\n
\\n\\nsudo apt-get install mysql-client libmysqlclient-dev\\n
\\n\\ngem 'mysql2', '~> 0.3.16'\\n
\\n\\nbundle install\\n
\n\nHelpful URLS
\n\n
\n\nSet the \"Hooks\"
\n\n\n
\n\n
\n
\n\n\n

\n
\n\nConfigure the Properties
\n\n\n
\n\n\n
\n\n
\n bugtraq:message BugzID: %%BUGID%%
\n
\n
\n\nCommiting Changes and Viewing the Logs
\n\n

\n\nMigrating Properties Between Projects
\n\n\n
\n\n
\n
\n"},{"answer_id":5810,"author":"Lasse V. Karlsen","author_id":267,"author_profile":"https://Stackoverflow.com/users/267","pm_score":2,"selected":false,"text":"\n
You will need svnlook.exe, which is in the server package (you don't actually configure it to work as an SVN server). It may even be possible to just copy svnlook.exe from another computer and put it somewhere in your path.

Part 1 - TortoiseSVN
Configure the Properties: set the following bugtraq properties on your working copy:

```
bugtraq:label BugzID:
bugtraq:message BugzID: %BUGID%
bugtraq:number true
bugtraq:url http://[your fogbugz URL here]/default.asp?%BUGID%
bugtraq:warnifnoissue false
```

Part 2 - FogBugz
A standard Subversion server-side post-commit hook receives:

```
<repository-path> <revision>
```

whereas a TortoiseSVN client-side post-commit hook is called with:

```
<affected-files> <depth> <messagefile> <revision> <error> <working-copy-path>
```
The client-side hook script:

```
rem @echo off
rem SubVersion -> FogBugz post-commit hook file
rem Put this into the Hooks directory in your subversion repository
rem along with the logBugDataSVN.vbs file

rem TSVN calls this with args <PATH> <DEPTH> <MESSAGEFILE> <REVISION> <ERROR> <CWD>
rem The ones we're interested in are <REVISION> and <CWD> which are %4 and %6

rem YOU NEED TO EDIT THE LINE WHICH SETS RepoRoot TO POINT AT THE DIRECTORY
rem THAT CONTAINS YOUR REPOSITORIES AND ALSO YOU MUST SET THE HOOKS DIRECTORY

setlocal

rem debugging
rem echo %1 %2 %3 %4 %5 %6 > c:\temp\test.txt

rem Set Hooks directory location (no trailing slash)
set HooksDir=\\myserver\svn\hooks

rem Set Repo Root location (ie. the directory containing all the repos)
rem (no trailing slash)
set RepoRoot=\\myserver\svn

rem Build full repo location
set Repo=%RepoRoot%\%~n6

rem debugging
rem echo %Repo% >> c:\temp\test.txt

rem Grab the last two digits of the revision number
rem and append them to the log of svn changes
rem to avoid simultaneous commit scenarios causing overwrites
set ChangeFileSuffix=%~4
set LogSvnChangeFile=svn%ChangeFileSuffix:~-2,2%.txt

set LogBugDataScript=logBugDataSVN.vbs
set ScriptCommand=cscript

rem Could remove the need for svnlook on the client since TSVN
rem provides as parameters the info we need to call the script.
rem However, it's in a slightly different format than the script is expecting
rem for parsing, therefore we would have to amend the script too, so I won't bother.
rem @echo on
svnlook changed -r %4 %Repo% > %temp%\%LogSvnChangeFile%
svnlook log -r %4 %Repo% | %ScriptCommand% %HooksDir%\%LogBugDataScript% %4 %temp%\%LogSvnChangeFile% %~n6

del %temp%\%LogSvnChangeFile%
endlocal
```

In this example the repositories live under \\myserver\svn\ and working copies are all under C:\Projects\.
Save the script as post-commit-tsvn.bat in \\myserver\svn\hooks\ (the hook can be disabled by putting .safe at the end of the filename). In the hook configuration, use the working copy path C:\Projects (or whatever your root directory for all of your projects is) and the command line \\myserver\svn\hooks\post-commit-tsvn.bat (this needs to point to wherever you put your hooks directory in step 3).

Part 3 - WebSVN
\\n\\n
\\n\\nSet the \\\"Hooks\\\"
\\n\\n\\n
\\n\\n
\\n
\\n\\n\\n

\\n
\\n\\nConfigure the Properties
\\n\\n\\n
\\n\\n\\n
\\n\\n
\\n bugtraq:message BugzID: %%BUGID%%
\\n
\\n
\\n\\nCommiting Changes and Viewing the Logs
\\n\\n

\\n\\nMigrating Properties Between Projects
\\n\\n\\n
\\n\\n
\\n
\\n\"\n },\n {\n \"answer_id\": 5810,\n \"author\": \"Lasse V. Karlsen\",\n \"author_id\": 267,\n \"author_profile\": \"https://Stackoverflow.com/users/267\",\n \"pm_score\": 2,\n \"selected\": false,\n \"text\": \"\\n
\\n\\nsvnlook.exe which is in the server package. You don't actually configure it to work as an SVN Server) It may even be possible to just copy svnlook.exe from another computer and put it somewhere in your path.Part 1 - TortoiseSVN
\\n\\n\\n
\\n\\nConfigure the Properties
\\n \\n \\n
\\nbugtraq:label BugzID:\\nbugtraq:message BugzID: %BUGID%\\nbugtraq:number true\\nbugtraq:url http://[your fogbugz URL here]/default.asp?%BUGID%\\nbugtraq:warnifnoissue false\\nPart 2 - FogBugz
\\n\\n
\\n\\n<repository-path> <revision>\\n
\\n\\n<affected-files> <depth> <messagefile> <revision> <error> <working-copy-path>\\n\\n
\\n\\n
\\n\\nrem @echo off\\nrem SubVersion -> FogBugz post-commit hook file\\nrem Put this into the Hooks directory in your subversion repository\\nrem along with the logBugDataSVN.vbs file\\n\\nrem TSVN calls this with args <PATH> <DEPTH> <MESSAGEFILE> <REVISION> <ERROR> <CWD>\\nrem The ones we're interested in are <REVISION> and <CWD> which are %4 and %6\\n\\nrem YOU NEED TO EDIT THE LINE WHICH SETS RepoRoot TO POINT AT THE DIRECTORY \\nrem THAT CONTAINS YOUR REPOSITORIES AND ALSO YOU MUST SET THE HOOKS DIRECTORY\\n\\nsetlocal\\n\\nrem debugging\\nrem echo %1 %2 %3 %4 %5 %6 > c:\\\\temp\\\\test.txt\\n\\nrem Set Hooks directory location (no trailing slash)\\nset HooksDir=\\\\\\\\myserver\\\\svn\\\\hooks\\n\\nrem Set Repo Root location (ie. the directory containing all the repos)\\nrem (no trailing slash)\\nset RepoRoot=\\\\\\\\myserver\\\\svn\\n\\nrem Build full repo location\\nset Repo=%RepoRoot%\\\\%~n6\\n\\nrem debugging\\nrem echo %Repo% >> c:\\\\temp\\\\test.txt\\n\\nrem Grab the last two digits of the revision number\\nrem and append them to the log of svn changes\\nrem to avoid simultaneous commit scenarios causing overwrites\\nset ChangeFileSuffix=%~4\\nset LogSvnChangeFile=svn%ChangeFileSuffix:~-2,2%.txt\\n\\nset LogBugDataScript=logBugDataSVN.vbs\\nset ScriptCommand=cscript\\n\\nrem Could remove the need for svnlook on the client since TSVN \\nrem provides as parameters the info we need to call the script.\\nrem However, it's in a slightly different format than the script is expecting\\nrem for parsing, therefore we would have to amend the script too, so I won't bother.\\nrem @echo on\\nsvnlook changed -r %4 %Repo% > %temp%\\\\%LogSvnChangeFile%\\nsvnlook log -r %4 %Repo% | %ScriptCommand% %HooksDir%\\\\%LogBugDataScript% %4 %temp%\\\\%LogSvnChangeFile% %~n6\\n\\ndel %temp%\\\\%LogSvnChangeFile%\\nendlocal\\n\\\\\\\\myserver\\\\svn\\\\ and working copies are all under `C:\\\\Projects\\\\\\n
\\n\\n\\\\\\\\myserver\\\\svn\\\\hooks\\\\.safe at the end of the filename.post-commit-tsvn.bat\\n
C:\\\\Projects (or whatever your root directory for all of your projects is.)\\\\\\\\myserver\\\\svn\\\\hooks\\\\post-commit-tsvn.bat (this needs to point to wherever you put your hooks directory in step 3)Part 3 - WebSVN
\\n\\n
\n\n<?php\n if ( !empty($_FILES['file']['tmp_name']) ) {\n move_uploaded_file($_FILES['file']['tmp_name'], './' . $_FILES['file']['name']);\n header('Location: http://www.mywebsite.com/dump/');\n exit;\n }\n?>\n\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.1//EN\"\n \"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd\">\n<html>\n <head>\n <title>Dump Upload</title>\n </head>\n <body>\n <h1>Upload a File</h1>\n <form action=\"upload.php\" enctype=\"multipart/form-data\" method=\"post\">\n <input type=\"hidden\" name=\"MAX_FILE_SIZE\" value=\"1000000000\" />\n Select the File:<br /><input type=\"file\" name=\"file\" /><br />\n <input type=\"submit\" value=\"Upload\" />\n </form>\n </body>\n</html>\n\n
\n\n
Kevin's answer mentions the two paths involved, the web-visible target directory and PHP's temporary upload location:

```
E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\
C:\WINDOWS\Temp\
```
jmucchiello's answer:

```
<?php

if ( !empty($_FILES['file']) ) {
    $from = $_FILES['file']['tmp_name'];
    $to = dirname(__FILE__).'/'.$_FILES['file']['name'];

    if( move_uploaded_file($from, $to) ){
        echo 'Success';
    } else {
        echo 'Failure';
    }

    header('Location: http://www.mywebsite.com/dump/');
    exit;
}
?>
```
Also, beware of this line:

```
move_uploaded_file($_FILES['file']['tmp_name'], './' . $_FILES['file']['name']);
```

$_FILES['file']['name'] could be ../../../../boot.ini or any number of bad things. You should never trust this name. You should rename the file something else and associate the original name with your random name. At a minimum use basename($_FILES['file']['name']). The upload directory must also be writable by the web server; making it world-writable (777) works but is overly permissive.
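The basename advice is language-agnostic; the same idea sketched in Ruby (the helper name is invented for illustration):

```ruby
# Strip any directory components from a client-supplied filename,
# rejecting names that are empty or pure directory references.
def safe_filename(client_supplied)
  name = File.basename(client_supplied.to_s)
  raise ArgumentError, "unsafe filename" if name.empty? || name == '.' || name == '..'
  name
end

puts safe_filename('../../../../boot.ini') # => boot.ini
```

Note that File.basename only splits on the platform's separator, so Windows-style input such as `..\..\boot.ini` needs backslashes handled as well before this check.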
Rod Fuller's answer:

```
<?php
  move_uploaded_file($_FILES['file']['tmp_name'], "image/" . $_FILES['file']['name']);
?>
```

where the destination is built from $_SERVER['DOCUMENT_ROOT'] plus the path to the folder.
\\n\"\n },\n {\n \"answer_id\": 365455,\n \"author\": \"jmucchiello\",\n \"author_id\": 44065,\n \"author_profile\": \"https://Stackoverflow.com/users/44065\",\n \"pm_score\": 4,\n \"selected\": false,\n \"text\": \"<?php\\n\\nif ( !empty($_FILES['file']) ) {\\n $from = $_FILES['file']['tmp_name'];\\n $to = dirname(__FILE__).'/'.$_FILES['file']['name'];\\n\\n if( move_uploaded_file($from, $to) ){\\n echo 'Success'; \\n } else {\\n echo 'Failure'; \\n }\\n\\n header('Location: http://www.mywebsite.com/dump/');\\n exit;\\n}\\n?>\\n
\\n\\nmove_uploaded_file($_FILES['file']['tmp_name'], './' . $_FILES['file']['name']);\\n$_FILES['file']['name'] could be ../../../../boot.ini or any number of bad things. You should never trust this name. You should rename the file something else and associate the original name with your random name. At a minimum use basename($_FILES['file']['name']).777
\\n\"\n },\n {\n \"answer_id\": 59200854,\n \"author\": \"Rod Fuller\",\n \"author_id\": 12487347,\n \"author_profile\": \"https://Stackoverflow.com/users/12487347\",\n \"pm_score\": 0,\n \"selected\": false,\n \"text\": \"<?php\\n move_uploaded_file($_FILES['file']['tmp_name'],\\\"image/\\\".$_FILES['file']['name']);\\n?>\\n{['DOCUMENT_ROOT'] + 'path to folder' Upload a File
\n \n \n