Power Query: not enough memory
Good afternoon! Every day the bank sends a statement of the previous day's transactions, and the statement is automatically downloaded as .xlsx into a folder. The task is to combine all the statements into one file, which I am doing with Power Query. The approach itself is clear. The problem is that only the first 20 rows load; the rest do not. What can I do? Notably, if I delete the totals rows at the bottom of the statement table (rows 160 and 161 in the file), all rows load - but the bank sends the files exactly like this. The goal is to automate the whole process; manually deleting rows from every file is not an option. Any advice would be appreciated.
Good afternoon.
Deleting the row has nothing to do with it. For me your example file loaded everything (168 rows in the query).
You need to work on the export parameters here.
There was a similar case recently. The person couldn't change the export parameters (format, etc.), and it was solved by programmatically re-saving the file. Essentially: open the file, click save, close it, and everything showed up in Power Query.
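If the rows do load but the bank's totals lines still need to be dropped automatically, a Power Query step can also cut the table off at the first totals row. This is only a minimal sketch under assumptions - the path, sheet name, column name, and the "Итого" marker text are guesses, not the poster's actual file layout:

let
    // load one statement (path and sheet name assumed)
    Source = Excel.Workbook(File.Contents("C:\Statements\2020-01-01.xlsx"), null, true),
    Sheet = Source{[Item = "Лист1", Kind = "Sheet"]}[Data],
    // keep rows only up to the first totals row ("Итого" as the marker is an assumption)
    UpToTotals = Table.FirstN(Sheet, each [Column1] <> "Итого")
in
    UpToTotals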
Hi Fran. A few questions:
- What kinds of data sources are you accessing?
- How much data are you attempting to load?
- Are you loading to the Excel data model, or to an Excel sheet? (or both)
Here are some general tips for dealing with memory issues in Power Query:
- If possible, switch to the 64-bit version of PQ/Excel (I think you mentioned in another thread that this wasn't an option for you)
- Make the data as small as possible before loading it to Excel. Do your filtering, removal of columns, etc. up front instead of post-load (see the sketch after this list).
- Load to the Excel data model, rather than an Excel sheet. (The data model is compressed and designed for high performance.) You can then build pivot tables on top of the data model.
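For illustration, here is a minimal M sketch of doing the reduction up front; the file name, column names, and filter are assumptions, not anything from this thread:

let
    Source = Csv.Document(File.Contents("C:\data\stocks.csv")),
    Promoted = Table.PromoteHeaders(Source),
    // filter rows and drop columns as the very first steps, so less data reaches Excel
    Filtered = Table.SelectRows(Promoted, each [Ticker] <> null),
    Slim = Table.SelectColumns(Filtered, {"Ticker", "Date", "Close"})
in
    Slim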
If you can share your queries (try opening them in the Advanced Editor to see their full text), or at least what kinds of operations you're doing on the data (such as merges, appends, pivots/unpivots, etc.), that would be helpful.
Ultimately you can always run Power Query out of memory, since you could use it to load arbitrarily large amounts of data into Excel. But hopefully we can help you figure out how to avoid this for your scenario.
Thanks for getting back to me so soon.
To answer your questions in order:
- What kinds of data sources are you accessing?
Excel tables and .CSV files
Excel Tables (2), ~300 records each
.CSV Table ~8000 records (Stock data feed)
Initially to two Excel sheets; now totally Connection Only (Excel 2013 does not allow Connect to Data Model - does it?)
Referring to your Tips to manage this issue:
- Cannot go to 64-bit, as the client is not going there. I also understand that 64-bit may not address the issue - and I have 32-bit 2010, which is the client's level of use.
- This just in - I found that I had two queries that were referencing each other (my bad), which possibly contributed to memory escalation. Fixed this by restructuring the app. However, not the total solution.
- Also (since this did not fully address the issue) I parted the app out into three separate modules. This moderated the issue somewhat, although memory use still seemed to run away at random moments. I used Windows Task Manager to monitor memory use and saved interim copies before it locked up. Curiously enough, just listening to my laptop fan was enough to tell me about memory use and CPU load.
- I would be happy to share my code and data with you if you feel it might be constructive.
- My research on the issue turned up a lot of feedback on memory usage and race conditions - but nothing substantive in the way of a solution.
- It would be nice to isolate the condition but have no access or skill to debug.
In summary, parting out the app and absolutely not loading to tables seemed the best strategy. The jury is out on the influence of (regular) pivot table usage as a contributory factor. I also have Excel 2016 Pro Plus (365) on my system, and have found that Excel 2016 (although reporting itself as up to date) warned me that the app - last loaded and saved under Excel 2010 - may not work fully under 2016. Go figure. E.g. Csv.Document apparently gets treated differently in 2010 and 2016, causing an exception to do with columns, which I have not fully investigated.
It is sad though that this memory issue is a general one - according to web searches - and is apparently still an issue at this time. Also sad that the integration within Excel appears so loose. Not sure why PQ and PV need to be under the Excel umbrella anyway - although PQ has added several orders of magnitude of functionality to Excel, without which we would be stuck in the doldrums of VBA.
I am sure that a book on the development of PQ would make awesome reading.
Thanks for your time again Ehren.
I'd recommend trying the following:
- Instead of using Excel connections, try loading to the Data Model (which is supported in Excel 2013)
- To create a pivot table based on a Data Model table, see this tutorial (just skip the "Import" section, as you'll be importing data via PQ)
- Use the Send a Frown feature in PQ to get a list of all your formulas, and paste them here (assuming they don't contain sensitive info). This will help us recommend other things you might be able to do to improve performance. (For example, I'm not entirely sure what you mean by dividing the app into three separate modules. But seeing the formulas will clarify this.)
Memory issues like this are not "sad" (unless they're due to bugs). They're just an inherent part of wrangling large amounts of data within a 32-bit architecture. For example, you can easily write a one-line M expression that will run your computer out of CPU or RAM. That's the power of the system we created - but with great power comes great responsibility. :)
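For instance, a one-liner like the following (an illustrative guess, not an expression from the thread) asks the engine to materialize and deduplicate two billion numbers, which will exhaust RAM on most machines:

// deliberately pathological: List.Distinct must hold every distinct value in memory
= List.Count(List.Distinct(List.Numbers(0, 2000000000)))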
Thanks again, Ehren.
In response to your last - Excel 2013 is not an option for the client (still on 2010), so I am using 2010 and 2016 to develop the app. I have parted the full app out into three modules - VALIDATE (check for data file consistency and integration of keys), REVIEW (review current stock positions with each investor, using slicers to focus on key parameters) and OPENPOS (allow the broker to determine which accounts have open positions and which stocks to buy to complete portfolios). There is actually a fourth module, TRADE, which is not developed yet.
I tried building a Frown package but it seems to be too sparse to allow one to make a determination. I can prepare a data & app package and put it up on a One Drive share if that works for you. I have no problem with sharing the code, and the data files have been redacted. Let me know.
On a minor topic - there is a disconnect between 2010 and 2016 in the Csv.Document M function. The screenshots below show the result of switching between 2016 and 2010 while developing the VALIDATE module with the Csv.Document function. Recall that I have to develop down to 2010 at the moment.
This came up going from 2010 up to 2016, when I discovered that the Excel tables were not being refreshed after the PQ scripts had completed in 2010 - 2016 was OK.
Removing the offending parameter in the 2016 version of the user-defined function fnPrice fixed the problem.
2010 version vs. 2016 version (screenshots not preserved).
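The poster's actual formulas aren't preserved, but as a purely assumed illustration of the kind of version difference involved: newer engines accept a record of options as the second argument to Csv.Document, while 2010-era engines only understand the positional arguments.

let
    SourceFile = File.Contents("C:\data\prices.csv"),
    // record-options form (newer engines; all option values here are assumed)
    Newer = Csv.Document(SourceFile, [Delimiter = ",", Columns = 5, Encoding = 1252, QuoteStyle = QuoteStyle.None]),
    // positional form, understood by 2010-era engines as well
    Older = Csv.Document(SourceFile, 5, ",", ExtraValues.Ignore, 1252)
in
    Older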
If you can refer me to links that address memory escalation and variances between implementations (such as the above) then please let me know. I do not want to take your time on issues that are or have been resolved.
BTW - Is there a publicly accessible registry of PQ/PP/PBI issues that are being triaged?
(I would like to avoid reworking old ground and instead give my vote on issues that are already registered and classified as to priority.)
I am getting an "Out of memory" error while arranging the data (a notepad file around 325 MB, containing ~8 million rows).
The calculation shown in the status bar goes up to 552 MB, and then this error comes.
I am currently using Windows 7 64-bit and Office 2010 Professional Plus 32-bit, with 16 GB RAM.
Could you please suggest any workaround?
Thanks in advance for your valuable suggestion.
All replies
Using 64 bit Office could help. Also, can you share your queries with us? Maybe we can suggest some optimizations.
I am using the code below in order to arrange the ssh log:
Previously the data in the notepad file was around 200 MB, containing approx. 8 million rows,
and using the query below, the imported rows came down to around 4 million rows.
But now this data is continuously increasing.
Please share any suggestions:
The likely culprit is the Pivot step, which will cause the entire file to be read into memory.
If you can't omit the Pivot, another option would be to try 64-bit Excel/PQ.
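For context, a Pivot step in M has the following shape (the table and column names are assumptions, not the poster's query). Building the list of pivoted columns with List.Distinct forces a scan of every row, which is why the whole file ends up being read into memory:

Pivoted = Table.Pivot(Source, List.Distinct(Source[EventType]), "EventType", "Message")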
I am using Excel 2013 64-bit but get the same error :-(.
Could you suggest any modification to the above code?
Thanks in advance.
Hi Rajender. If you can provide a small sample file, we may be able to help you come up with an alternate solution.
I uploaded the sample file at the link below.
Thanks in advance for your assistance.
Hi Rajender. The main "trick" here seems to be not splitting the file into lines at the very start. Instead, if you split it by logical delimiters, you then end up with every record on its own line, which means you don't have to do a pivot, group, or other inefficient operation to join multiple rows back together.
I've started to do that in the following M. Hopefully you can take this and run with it:
EDIT: I've just been informed that Text.FromBinary will read the entire file into memory. However, in my local testing, the above approach seemed to perform much better than the Unpivot/Pivot solution. Let me know if it works for you.
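The original M wasn't preserved in this archive; the following is only a sketch of the approach described above, with the file path, record marker, and field names all assumed:

let
    // read the whole file as text (as noted above, Text.FromBinary loads it into memory)
    Raw = Text.FromBinary(File.Contents("C:\logs\ssh.log")),
    // split on the marker that begins each logical record, not on line breaks,
    // so each record lands in exactly one row (the marker text is an assumption)
    Records = Text.Split(Raw, "#(lf)session-start:"),
    AsTable = Table.FromList(Records, Splitter.SplitByNothing(), {"Record"}),
    // then split each record's fields into columns (field delimiter assumed)
    Fields = Table.SplitColumn(AsTable, "Record", Splitter.SplitTextByDelimiter(";"), {"Time", "User", "Event"})
in
    Fields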
Thanks a lot for your valuable suggestion, but unfortunately I am getting the same error.
This time the error appears after 5-10 seconds.
Try this. First, create a query called ConvertRecords and use the following formula text:
Next, create another query with the following formula text:
Let me know if this works for you.
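The formula text itself wasn't preserved in this archive. As a rough idea of the two-query shape being described (everything below - the delimiters, field handling, and block size - is assumed, not Ehren's actual solution): the first query is a function that turns one block of log lines into a record, and the second splits the file into blocks and applies it.

// Query 1 - name it ConvertRecords (a function; previewing it shows no table)
(lines as list) as record =>
    let
        // each line assumed to look like "Field: value"
        Pairs = List.Transform(lines, each Text.Split(_, ": ")),
        Result = Record.FromList(List.Transform(Pairs, each _{1}), List.Transform(Pairs, each _{0}))
    in
        Result

// Query 2 - splits the file into fixed-size blocks and applies ConvertRecords
let
    Raw = Lines.FromBinary(File.Contents("C:\logs\ssh.log")),
    Blocks = List.Split(Raw, 4),  // assumes each logical record spans 4 lines
    Records = List.Transform(Blocks, ConvertRecords),
    Result = Table.FromRecords(Records)
in
    Result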
I am a little confused about how to create two different queries.
Currently I copy the first code, paste it into the Advanced Editor, and it shows an error.
To create a new query, just click "From Other Sources" in the Excel PQ ribbon and select "Blank Query".
What error are you getting with the first query? It's a function, so you won't see a table when you preview it.
I am getting the errors below when copy-pasting the two queries into my tracker:
Error in 1st query:
Error in 2nd query:
Please suggest. I think I am missing something.
Hi Rajender. The 1st query result is not an error, and appears to be correct. The 2nd query's error indicates you need to rename the 1st query to ConvertRecords. Please give that a try and let me know if that allows it to work.
Thanks a lot - this time the query successfully shows the output. I am now checking whether all the values are OK.
Meanwhile, a few findings:
1) The old query previously took approx. 15 minutes; this query takes approx. 1.15 hrs.
2) I tested both queries and found a difference of 3-4 rows in the output. Currently checking this; will get back to you.
Once again, thanks a ton for your valuable support.
Sorry for the late response; I was away due to ill health.
I checked the code above and it's working fine. The extra rows that are appearing may be due to some blank rows.
I want to learn more about Power Query. Can you suggest where I can find step-by-step video tutorials, especially for importing data from other sources like SAP, Azure, Oracle, etc.?
Thanks a ton, once again, for your valuable support.
Hi Rajender. Glad it's working for you. For tutorials, your best bet would probably be to search the web for the specific scenarios you're interested in.
Sure I will search accordingly.
Could you assist me with the issue below too, if it's possible via Power Query?
Hi Rajender. Here's another log parsing solution to try that should be faster than the one listed above.
The new code gets to the output very fast, but the result seems to be doubled.
I checked the new code on an old file where the output is around 600,000 rows, but with the new query it goes up to 1.2 million, and it finally errors out at the row limit.
Could you please assist?
Ah, you're right. I've updated the code above. Let me know if it works properly now.
The key to making this particular solution perform well is to pass GroupKind.Local when grouping the rows, which tells Table.Group that the rows which make up a given group are all contiguous.
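Ehren's corrected code isn't preserved here, but the GroupKind.Local pattern he describes looks roughly like this (the start-of-record marker and column names are assumptions):

let
    Raw = Lines.FromBinary(File.Contents("C:\logs\ssh.log")),
    AsTable = Table.FromList(Raw, Splitter.SplitByNothing(), {"Line"}),
    // GroupKind.Local only groups contiguous rows; the comparer returns non-zero
    // when a line starts a new record, so each marker line opens a new group
    // (the "session-start" marker text is an assumption)
    Grouped = Table.Group(AsTable, "Line",
        {{"Record", each Text.Combine([Line], ";"), type text}},
        GroupKind.Local,
        (state, current) => if Text.StartsWith(current, "session-start") then 1 else 0)
in
    Grouped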
Thanks a ton for your support. The code looks very fast, and I cross-checked the data - almost 99% accurate. I need to cross-check it again against a new raw-data report (last time there was a gap in the raw data itself).
Will get back to you if there is still any issue.
I hope I may add to the discussion here, although the thread is a little stale - the issue is still current based on my online research.
I am running 32-bit, as 64-bit is not an option (for the reasons given on the Office download site).
I rebuilt my app using "Connection Only" without memory issues. However, as soon as I loaded a query to a table, memory usage increased continually until Excel froze with a memory error at 900 MB-1 GB.
Curiously enough, this issue has become a roadblock for me only in the past few weeks. Wondering if an "upgrade" may have triggered it.
Since the issue arose as soon as I went from Connection Only to Load to Table, I suspect code optimization may not be part of the solution.
When I work in the preview, I expect the next step to appear instantly.
Instead it reads ALL THE FILES' DATA OVER AGAIN.
Why?! So slow and inefficient.
Please fix this - the whole process should only rerun when I click "Refresh data".
Please fix, please!
All replies
Hi Michael. What data source(s) are you pulling from? CSV files on SharePoint? Excel files on a network folder? Something else?
Very few (5-8) and very small (up to 1 Mb)
What bothers me is that every time I add a step, just a simple step like "change a column name" or multiply a column by a number - it starts to load all the files AGAIN (it shows this at the bottom).
I already have all the preview I need. I just want to rename one column, not to load everything over again.
I find myself WAITING on Power Query 80% of my time. 80% WAITING.
Don't get me wrong - I LOVE you guys, and I admire all the innovation you make with Power BI, Power Query, etc.
But my user experience of WAITING for Power Query is AWFUL.
One day I will become a Program Manager of all Microsoft's data platform products, and we will definitely focus on the UX.
Anyway - can you help, please?
Hi Michael. Thanks for the additional detail. Are these files .xlsx, .xls, or .xlsb?
Ehren, thank you for your prompt response!
I've double-checked - there are 6 .xlsx files in a network folder, each 4.3 MB, plus 2 more .xlsx files: one tiny 70 KB file, and one 10 MB file in another network folder.
I am running Excel 2013 with the Power Query add-in, version 2.55.5010.1402, 64-bit.
Thank you for help.
I'll buy you a beer if you solve this!
Hi Michael. Can you try downloading the latest version of the PQ plugin? If that doesn't improve things, can you record a video of what you're seeing and share a link to it here? For example, renaming a column shouldn't cause PQ to re-pull the underlying data. If you're seeing something different, we'd like to get to the bottom of it.
Also, if you could share the M formula text of your queries that would also be helpful. You can copy this from the Advanced Editor in the Power Query Editor.
Thank you very much for trying to help.
Because the files contain sensitive data, I cannot send them as is. But since you really want to get to the bottom of it, I will gladly invest two or three hours to change the numbers to random values and send you the folder so you can improve the Power Query product. I'll get to it this week.
Hi Michael. Were you able to come up with an anonymized version of your scenario?
Not yet - I am working to finish the project for my client, and then I can swap the actual data for anonymized data.
I had to rebuild everything from scratch and separate the queries into different files :(
It worked OK at the beginning but then got stuck again - waiting and waiting for each step for several minutes.
I've noticed, though, that the slowdown began when I "duplicated" two of my queries.
Maybe DUPLICATING a query creates some sort of dependency bug?
One thing to try would be to Reference your queries instead of duplicating them. Duplicating them may end up causing PQ to do duplicate work.
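For illustration, a referenced query is simply a query whose first step names the other query - a minimal sketch (the query and column names are assumed):

// Duplicating copies every step, so the upstream work can run once per copy.
// Referencing starts from the finished query instead:
let
    Source = CSVExtracts,   // reference the existing query by name
    Filtered = Table.SelectRows(Source, each [Country] = "US")
in
    Filtered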
I'm experiencing the exact same scenario as OP. Like the OP, my files are private.
I have a query which loads files from CSV (about 5 of them). They are large but not exceptionally so - about 100,000 rows and 100 columns. Call this query "CSVExtracts".
My main query starts with = CSVExtracts,
then does a bunch of steps: deleting columns, filtering rows, deriving some new columns.
Bizarrely, this used to work OK; now it seems to grind to a halt.
Just like the OP says, making any change - even renaming a column, or renaming a step in the visual editor - makes the whole thing bog down my memory (maybe unreliable; it used to work fine, now it just doesn't).
PQ has now become unusable.
I will make another thread if needed, but the symptom is exactly like those described so it seems sensible to keep it here.
I should have added:
Excel 2016 v 16.0.4738 (which looks like a recent update to me?), 64-bit version,
running on Win 7 64-bit on a regular i5 laptop with 8 GB RAM.
Hi there. Can you share the version of PQ you're running? To find it, do the following:
- Click the Data tab of the ribbon
- Click Get Data (or New Query, if you are running an Excel build prior to 8067.2115) > Query Options
- Click Diagnostics. You should see the Power Query version printed there.
The option New Query > Query Options > Diagnostics > Version returns:
2.61.5192.461 64-bit
By the way, this is a corporate licence of Office Pro Plus; it is running on a locked-down laptop, and the security patches etc. are out of my control.
I went back and looked at some other examples, including the original version of this same query (which only operated on data loaded from one file). I'm finding the same thing - excruciatingly slow performance in the preview within the Power Query Editor.
Some more investigation turned up a suggestion to turn off the option 'Allow data preview to download in the background'.
I've now done that, and the file I am interested in seems to be performing much better. Still not great though, given I'm not really doing anything very intensive (once I load the data I am filtering to just a few hundred or thousand rows, adding new columns, etc.); e.g. it takes about 10 seconds if I change the parameter of an integer-divide step.
It's also not clear why this behaviour wasn't happening and then started to happen. Could it be because a new version of Excel 2016 somehow changed default settings?
I have noticed the same behavior, and it gets slower with each action. Since PQ is a recorder/interpreter and must be running from some sort of stack, perhaps the stack gets copied and the change added on top. In the olden days we might have called that a memory leak.
Right now I am hung in Excel 365 with a "Document Recovery" screen showing over my sheet. The Query Editor is running but greyed out. Task Manager says 2 to 10% CPU and 48% memory.
I suspect I will have to kill Excel to get out.
I am having a terrible time with the refreshes. I have buffered as many tables as possible, broken queries down into small steps, and turned off Background Refresh. I tried Fast Data Load; it didn't do much. I turned off GPU hardware acceleration. I only have one Excel file open; Outlook and SfB are running too.
Even a simple change, like a field name, causes the preview to be regenerated. I agree on the 80% waiting time - CPU = 100%, memory typically 90%. This particular project has cost me two full weekends plus every evening, and I'm still not done.
Using Office 365, Semi-Annual Channel, version 1803 (Build 9126.2315 Click-to-Run), Lenovo T470, 2 cores, 4 processors, 3.49 GHz, 8 GB RAM, HDD: 239 GB SSD.
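(For readers following along: "buffering" a table in M means wrapping a step in Table.Buffer so that later steps reuse the in-memory copy instead of re-reading the source. A minimal sketch, with the file path and steps assumed:)

let
    Source = Csv.Document(File.Contents("C:\data\sales.csv")),
    Promoted = Table.PromoteHeaders(Source),
    // pin the table in memory; subsequent steps read this copy
    Buffered = Table.Buffer(Promoted),
    Filtered = Table.SelectRows(Buffered, each [Amount] <> null)
in
    Filtered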
Hi JVD_123. Sorry to hear about your refresh trouble. Can you clarify what you're seeing when you change a field name? Is it when the field name changes in the data source, or when you rename a column in the PQ Editor? If it's the latter, the change should be nearly instantaneous.
If you'd like to share more details of your scenario privately, please PM me on the Power BI Community.
I'm so glad to see that I'm not the only one experiencing this. I have also spent many hours just waiting. It's very frustrating. I'm not sure what the issue is because it wasn't so bad at first, but after reading through some comments I wonder if it had something to do with duplicating a query. I have 2 folders with date/country/product data where I need 3 columns from one type of file and one column from the other. After setting up a fantastic query to get the data just right from the first folder, I duplicated the query and changed the source and one step in the query (remove other columns). I feel like this might be when it started becoming ridiculously slow. Changing anything in the PQ editor is now EXCRUCIATING. Refreshing or even opening the PQ editor, I can see it loading and it seems to be going through all of the files multiple times. I did just do a Windows update, I don't know if that might have something to do with it. Anyhow, glad that it's not just me because I'm new to this and still learning the ins and outs.
Hi EmmaBDub. What kind of files do these folders contain? Are you combining the data from these two folders (via Merge or Append), or just pulling them in separately? How many files does each folder contain, and how large are the files in total?
Just want to follow up, as I have the same issue with my Excel Query Editor. My query was running quite well without any problems, but became very slow from one point onward. I can't even open the Query Editor correctly: the window only shows the ribbon, the rest is white, and I cannot do anything.
The connected databases are Access (OLEDB) and some Excel files. I don't use duplicated queries, but I do have a few merge steps. I disabled "allow data preview to download in the background", enabled "fast data load", and I don't load data to the Data Model.
I feel like this changed after an update (Excel or PBI), but I am not sure. My current version is:
Excel 2016 (16.0.10730.20280) 64-bit
Query Editor 2.61.5192.1301 64-bit
Good afternoon.
When refreshing reports in Power Pivot, I often see the laptop's RAM and CPU running at full capacity.
The laptop has 8 GB of RAM and an i7-2620M 2.7 GHz CPU (2 cores = 4 threads).
Does anyone know how many resources Power Pivot / Power Query can use (how many cores/threads, how much RAM (memory frequency), and whether SSD vs. HDD matters)?
Good afternoon.
I regularly load my 8 cores at 100% (with Excel), while RAM usage stays around 20-30%. An SSD gives a very noticeable boost compared to an HDD.
"I regularly load my 8 cores at 100% (with Excel), while RAM usage stays around 20-30%"
What CPU do you have, and how much memory? Is that definitely with Power Pivot / Power Query?
P.S. I saw the same thing when recalculating with standard Excel formulas.
It can use everything available (x64).
A suboptimal or incorrect query can take down a server, never mind a laptop.
Speaking from personal experience: I accidentally picked the wrong fields for a JOIN, and the server's CPU (16 cores, 65 GB of RAM) went to 100%. Luckily I noticed in time and killed the process after a couple of minutes.
Formulas vary too: try running SUMIFS over all rows with a whole-column lookup range - that will hurt as well.
Which CPU is better - a Ryzen 1700 or an Intel i7 8700/7700? Maybe someone has tried to work it out.
If it's not a secret, where is the data pulled from (not from 1C)?
P.S. AMD is attractive because more CPUs are coming (+2 cores), while Intel has higher core clocks and fewer quirks.
"Formulas vary too: try running SUMIFS over all rows with a whole-column lookup range - that will hurt as well"
That's really true for SUMPRODUCT and array formulas; even COUNTIF isn't sensitive to the range size.
' timeGetTime lives in winmm.dll; this declaration is required for the calls below
#If VBA7 Then
Private Declare PtrSafe Function timeGetTime Lib "winmm.dll" () As Long
#Else
Private Declare Function timeGetTime Lib "winmm.dll" () As Long
#End If

Sub QueryTimer()
    Dim StartTime As Long, EndTime As Long
    Application.Calculation = xlCalculationManual
    ' build random test data: numbers in C, three-letter keys in A:B
    [C2:C5000].FormulaR1C1 = "=RANDBETWEEN(1,5000)"
    [A2:B5000].FormulaR1C1 = "=CHAR(RANDBETWEEN(97,122))&CHAR(RANDBETWEEN(97,122))&CHAR(RANDBETWEEN(97,122))"
    [A2:C5000].Calculate
    ' put a marker in the last cell of column F
    Cells(Rows.Count, "F") = 1
    ' start the clock and time the formulas already present in D2:D5000
    StartTime = timeGetTime()
    [D2:D5000].Calculate
    ' stop the clock and report the result
    EndTime = timeGetTime()
    [D1] = "T " & (EndTime - StartTime) / 1000 & " sec"
    ' restart the clock before timing the SUMIFS fill
    StartTime = timeGetTime()
    [E2:E5000].FormulaR1C1 = "=SUMIFS(C3,C1,RC1,C2,RC2)"
    ' stop the clock and report the result
    EndTime = timeGetTime()
    [E1] = "T " & (EndTime - StartTime) / 1000 & " sec"
    Application.Calculation = xlCalculationAutomatic
End Sub
Interesting: 2.5 sec versus 91.
You get a result like that when you calculate everything from scratch. But Excel is smart and doesn't do a full recalculation unless it has to; in my case it recalculates an already pre-processed structure.
It's well described here. This question probably deserves its own thread: what is the right methodology for evaluating formula efficiency? FastExcel is a good example, but how do they measure?