How fast is fast enough? Industry perspectives on LES

Hi all,

My group at Uppsala University has been working a lot on GPU-based large-eddy simulation (LES) for wind energy applications over the past few years. We think the computational efficiency we now reach could be good enough to allow wider adoption of LES in industrial practice.
As a “reality check”, we are now conducting a survey about industry perspectives on the use of LES.
We think the results can be very helpful for anyone doing industry-oriented research on wind farm modeling, and possibly also for university spin-offs in that area.

It would be great to get feedback from anyone with modelling experience in the industry.

Best,
Henrik

Great question, and welcome to the Induction Zone community, @henrikasmuth!

My feeling with all of these high-fidelity simulation tools is that there are several things in play:

  1. How feasible is it for a new user to adopt the tools in the first place (i.e., how much learning and experience is required before one is confident in the results)?
  2. What is the potential gain in actionable knowledge, and does it justify the extra cost of the simulations?
  3. Is it possible to speed up the simulations to an extent that one gets results in a reasonable time, using tools or solutions that are available to the typical user (see 1)? For example, cloud compute servers might be helpful, but only if setting them up and getting the software running doesn’t require a PhD.
  4. Adoption would be simpler if there were a ready-to-deploy tool that reduced the learning curve and setup process and came with clear benefits.

Packaging it all up in a commercial spin-off would help enormously - that ensures long-term availability of the technology and of support, the lack of which is a big barrier to adoption. I could definitely see there being an opportunity here for commercialisation.

Exactly. Among other things, the relative importance of the aspects you mention was one of our motivations for starting the survey.
In research like ours, which focuses heavily on computational efficiency, we often spend a lot of time optimizing for another 10% of performance. However, for an industrial application it might not matter too much whether we are 100 or 110 times faster than ordinary LES; the former might be more than sufficient. At least that is my current impression from the first responses, and it is also in line with what you see from companies like Whiffle.
Looking forward to getting a clearer picture at the end of the survey.

The use of LES is something that has been pitched several times, but I never got evidence of it being a better decision-making tool than standard, simple methods for resource assessment and operational assessment. The typical approach of “old stable quick method + a correction where needed” continues to be the standard and is, frankly, a big hurdle to overcome without said evidence.

There are other factors that need to be taken into account. Knowing when a simple model works well and when it doesn’t work could be sufficient because it bounds your level of confidence during decision making. Emphasis on investment decision making here.

A more computationally expensive tool that is not as close to plug-and-play as possible will necessarily be incredibly time consuming to use for building a baseline, iterating, and establishing the confidence bounds needed for decision making within an organization.

All of this said, I’ve seen big differences between a more pragmatic North American approach (“show me the data”) and a European approach that is more open to theory -> modeling -> decision making.

@henrikasmuth
I have been looking at use cases for pitching our WakeBlaster code, so here is what I figure you need. I personally would be surprised if LES is anywhere near that level today…

Preconditions:

  • assume a typical wind farm size of 100+ turbines (to have significant wake effects)
  • including external turbines, for offshore farms we are talking about a few thousand turbines (ignored in a first approximation)
  • a GPU-based calculation costs at least as much as a multi-CPU-based calculation (check the cost!)
  • in each case, assume you run flow cases on 1000 cores in parallel (cost implications!)
  • your model actually improves accuracy
  • overhead time is not considered

USE CASES

I) 60 core-sec per flow case: client type: operator; project phase: post-construction; purpose: real-time operation in parallel with a SCADA system, with the aim of detecting faults and improving operation. You need to process several variations/scenarios within 600 seconds, with overhead to consider, thus about 60 seconds per flow case is required. There is limited scope for processing multiple flow cases in parallel.

II) 200 core-sec per flow case: client type: operator; purpose: post-construction analysis of 1 year of 10-min data, roughly 50k flow cases. Clients do not want to wait more than 3 hours for the analysis, but flow cases can be processed in parallel on 1000 cores.

III) 800 core-sec per flow case: client type: developer/consultant; purpose: iterative pre-construction analysis, each iteration covering 180 directions x 25 wind speeds, or 4500 flow cases. Clients are prepared to wait a maximum of 1 hour per energy assessment. Can be processed in parallel on 1000 cores.

IV) 40 core-sec per flow case: client type: consultant; purpose: time-series-based analysis, 15 years at 1-hour time resolution, making roughly 90k flow cases. Can be processed in parallel on 1000 cores. Clients are prepared to wait for 1 hour.

V) 3.4 core-sec per flow case: client type: developer; purpose: layout optimisation, assuming 10,000 iterations with 2500 flow cases each. Clients are prepared to wait 24 hours for a result. Challenging to process in parallel, but let’s assume it is possible.

For comparison: our WakeBlaster RANS simulation takes 20 core-seconds per flow case with 100 turbines. So you could in principle run a layout optimisation, but that application remains challenging…
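
For readers who want to check the arithmetic: the budgets above follow from acceptable wait time x available cores / number of flow cases. Here is a minimal sketch in Python, using only the numbers stated in the use cases and the 1000-core assumption from the preconditions (use case I is omitted because it assumes limited parallelism):

```python
# Core-second budget per flow case = wait time x cores / number of flow cases.
# Numbers as stated in the use cases above; 1000 cores per the preconditions.

CORES = 1000

use_cases = {
    # name: (number of flow cases, acceptable wall-clock wait in hours)
    "II  post-construction, 1 yr of 10-min data": (50_000, 3.0),
    "III pre-construction energy assessment":     (4_500, 1.0),
    "IV  time-series analysis":                   (90_000, 1.0),
    "V   layout optimisation (10k x 2500)":       (25_000_000, 24.0),
}

for name, (n_cases, wait_h) in use_cases.items():
    budget = wait_h * 3600 * CORES / n_cases  # core-seconds per flow case
    print(f"{name}: {budget:.1f} core-sec per flow case")
```

This reproduces the quoted figures to within rounding (case II comes out at 216 core-sec, rounded down to 200 above).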

Why is a wake flow model so much more difficult than running an ordinary flow model:

  • you need a high directional resolution because you need to capture the wind farm geometry.
  • wake effects are wind speed and turbine state dependent
  • turbine characteristics change within the wind farm (density, turbulence)
  • you need a high enough resolution/domain to resolve both the rotor scale < 0.1 D and the wind farm scale > 100 D (see the rough sketch after this list)
  • users need to constantly iterate layouts as parameters change
    If you disregard this, you will not gain in accuracy (see the assumptions above)
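
To give a feel for the resolution point, here is a rough back-of-the-envelope sketch. Only the 0.1 D cell size and the 100 D farm extent come from the list above; the uniform Cartesian grid and the 10 D vertical extent are illustrative assumptions of mine:

```python
# Back-of-the-envelope cell count when both rotor and farm scales must be resolved.
# Assumptions: uniform Cartesian grid, 100 D x 100 D horizontal domain (farm scale),
# 10 D vertical extent (illustrative), 0.1 D cell size (rotor scale).

D = 1.0              # rotor diameter (normalised)
cell = 0.1 * D       # finest scale to resolve
Lx = Ly = 100.0 * D  # wind-farm scale
Lz = 10.0 * D        # assumed vertical extent of the domain

n_cells = (Lx / cell) * (Ly / cell) * (Lz / cell)
print(f"{n_cells:.1e} cells per flow case")  # ~1.0e+08 cells
```

Even with this crude estimate, a single flow case lands on the order of 10^8 cells, which is part of why the per-case budgets above are so demanding.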

@henrikasmuth seems like this question has been getting a lot of interest! What’s your plan for consolidating and disseminating the results of the survey?

First of all, thanks Wolfgang and Gaetano for the interesting insights. It’s very good to get such detailed feedback in addition to the survey.
My first overall impression is that views on potential use cases differ quite a lot. When trying to replace established models for wind resource assessment (WRA) applications, the required runtimes are obviously very short and probably unrealistic (similar to what Wolfgang outlined above). But potential use cases seem to be more diverse, e.g. using LES in addition to established models for only a selection of cases (particularly in complex terrain or for expected farm-to-farm interactions), or simply more regular applications in industrial research for cases where RANS is currently the standard. This also seems to be somewhat in line with the activities of companies like Whiffle.

We’ll keep the survey online for 2 more weeks or so. We’ll then evaluate it and look, among other things, at the runtime requirements people gave us for a typical case in their everyday business. We’ll then run a selection of generic farms corresponding to these cases and see if and how the required runtimes can be achieved. The final results will be presented at the Wake Conference next year. I might post some teasers of the results before then.

Our overall motivation is (so far) purely academic. We would simply like to know if LES, the way we do it (using LBM and GPUs), is fast enough to enable new applications in the industry. This can also help to target future research. For instance, do we need a lot more effort to get it even faster, or should we focus more on extending the model’s capabilities? (After all, the model gets faster anyway, since GPU performance still increases significantly every year.)

In a year or so, we are also planning an open-source release of the model.

And of course, keep responding to and sharing the survey if you haven’t yet!
It seems like the statistics are converging by now, but a few more responses would be nice to make the survey more representative.