Lesson 006 — Placement Optimization, Blocking Model and Gross-Net AEP Cascade¶
Course Navigation
Previous: Lesson 005 — Wind Vane Analysis & PyWake Wake Modeling | Next: Lesson 007 — HV Grid Integration
Phase: P1 | Language: English | Progress: 7 of 19 | All Lessons | Learning Roadmap
Date: 2026-02-24 | Commits: 1 (3cd5675)
Commit range: 6893c0295296b98a2d7fe022301ca066cfed4a38..3cd56751c0c768a91e0c8dbee4557b597b4c889f
Phase: P1 (Wind Resource & AEP)
Roadmap sections: Phase 1 — Section 1.2 Wake Modelling & Layout Optimization, Section 1.3 Energy Yield & Financial Analysis
Language: English | Previous lesson: Lesson 005
last_commit_hash: 3cd56751c0c768a91e0c8dbee4557b597b4c889f
What You Will Learn¶
- Turbine layout strategies (regular grid, staggered grid) and the physical impact of each on wake losses
- How layout optimization works with the differential evolution algorithm
- The physics of wind farm blockage and the empirical model of Nygaard (2020)
- Why the gross-net AEP cascade should be multiplicative — not additive
- RSS uncertainty aggregation, P50/P75/P90 excess values and income calculation
Part 1: Turbine Layout Strategies — From Grid to Stage¶
Real Life Problem¶
You are designing a parking lot. If you arrange the cars in neat rows, you get a simple and clear layout — but the exit routes are blocked. If you arrange the cars in a "herringbone" pattern, more cars will fit and traffic flow will improve. Wind farm layout works on the same logic: the position of the turbines relative to each other determines how efficiently wind flows through the farm.
What the Standards Say¶
IEC 61400-1 frames minimum turbine spacing in terms of turbulence-induced fatigue — at least 5D (5 × rotor diameter) is recommended. For the V236-15.0 MW with its 236 m rotor this is 1180 m. DNV-RP-0003 (energy assessment best practice) specifies the standard steps of layout optimization: initial grid → constraint definition → objective function → meta-heuristic optimization. As an industry convention, 5-8D spacing is used streamwise and 3-5D perpendicular to the wind (crosswind).
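As a quick sanity check of the spacing arithmetic, a one-line helper (hypothetical, not part of the lesson's codebase) converts diameters to metres:

```python
# Hypothetical helper: convert a spacing rule expressed in rotor diameters
# to metres for the V236-15.0 MW turbine (236 m rotor).
ROTOR_DIAMETER_M = 236.0

def spacing_m(n_diameters: float, rotor_diameter_m: float = ROTOR_DIAMETER_M) -> float:
    """n × rotor diameter, in metres."""
    return n_diameters * rotor_diameter_m

min_spacing = spacing_m(5.0)  # IEC 61400-1 minimum: 5D = 1180 m
crosswind = spacing_m(8.0)    # the module's crosswind spacing: 8D = 1888 m
```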
What We Built¶
Changed files:
backend/app/services/p1/layout_optimizer.py — Three layout strategies: regular grid, staggered grid, differential evolution optimization
backend/tests/test_layout_optimizer.py — 24 unit tests: range verification, geometry checks, optimization repeatability
The module implements a three-stage layout pipeline:
- Regular grid — a plain rectangular grid (6 columns × as many rows as needed), 5D × 8D spacing
- Staggered grid — Odd-numbered rows offset by half a column, entire layout rotated perpendicular to dominant wind direction (WSW, 255°)
- Optimized — Free position search that maximizes AEP with differential evolution
Why is it important?¶
Why do we put turbines in a staggered arrangement rather than a simple grid? In a straight row arrangement, one turbine's wake hits the turbine directly behind it. In a staggered layout, the back row turbines are offset by half a column, thus avoiding the center of the wake. This simple geometric change can reduce wake losses by 20-30% — a difference of ~30 GWh ≈ €2.2M per year on a 510 MW farm.
Why do we rotate the staggered grid according to the dominant wind direction? The wake effect occurs parallel to the wind direction. Placing turbine rows perpendicular to the dominant wind direction (WSW = 255°) minimizes wake overlap in the strongest wind. The rotation angle is calculated from meteorological convention:
angle_rad = radians(255° - 270°) = -15°.
Code Review¶
The regular grid is efficiently created with NumPy's meshgrid function:
def generate_regular_grid(
    num_turbines: int = 34,
    streamwise_d: float = 5.0,  # 5D along the wind direction
    crosswind_d: float = 8.0,   # 8D perpendicular to the wind
) -> LayoutResult:
    dx = streamwise_d * ROTOR_DIAMETER_M  # 5 × 236 = 1180 m
    dy = crosswind_d * ROTOR_DIAMETER_M   # 8 × 236 = 1888 m
    n_cols = 6
    n_rows = int(np.ceil(num_turbines / n_cols))  # ceil(34/6) = 6 rows
    x_grid, y_grid = np.meshgrid(
        np.arange(n_cols) * dx,  # [0, 1180, 2360, 3540, 4720, 5900]
        np.arange(n_rows) * dy,  # [0, 1888, 3776, 5664, 7552, 9440]
    )
    x_all = x_grid.ravel()[:num_turbines]  # Take the first 34 positions
    y_all = y_grid.ravel()[:num_turbines]
meshgrid expands two 1D arrays into a 2D grid — we take the first 34 of the 6 × 6 = 36 points (leaving 2 empty positions in the last row). ravel() flattens the 2D matrix into a 1D vector.
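A miniature version of the same construction (a toy 2-column by 3-row grid, not the real 34-turbine layout) makes the flattening order visible:

```python
import numpy as np

# meshgrid expands column offsets and row offsets into a full 2D grid;
# ravel() then flattens row by row (C order).
x_grid, y_grid = np.meshgrid(np.array([0, 1180]), np.array([0, 1888, 3776]))
x_all = x_grid.ravel()  # [0, 1180, 0, 1180, 0, 1180]
y_all = y_grid.ravel()  # [0, 0, 1888, 1888, 3776, 3776]
```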
Staggered grid adds a half-column offset to odd-numbered rows and then applies 2D rotation:
# Stagger: shift odd-numbered rows by half a column
if row % 2 == 1:
    x += 0.5 * dx  # 590 m offset → escapes the wake centerline
# Rotation: align perpendicular to the dominant wind direction
angle_rad = np.radians(predominant_direction_deg - 270.0)  # 255° - 270° = -15°
cos_a, sin_a = np.cos(angle_rad), np.sin(angle_rad)
# Rotate about the centroid
x_rot = x_c * cos_a - y_c * sin_a + cx
y_rot = x_c * sin_a + y_c * cos_a + cy
The rotation formula is the standard 2D affine transformation. First the layout's centroid is computed, positions are translated relative to it, rotated, and translated back — so the rotation does not shift the layout; it only rotates it.
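A self-contained sketch of this translate-rotate-translate-back step (a hypothetical helper, not the module's actual code) shows that the centroid stays fixed:

```python
import numpy as np

def rotate_about_centroid(x: np.ndarray, y: np.ndarray, angle_deg: float):
    """Rotate positions about their centroid; the centroid itself is unmoved."""
    a = np.radians(angle_deg)
    cos_a, sin_a = np.cos(a), np.sin(a)
    cx, cy = x.mean(), y.mean()               # centroid
    x_c, y_c = x - cx, y - cy                 # translate to the centroid
    x_rot = x_c * cos_a - y_c * sin_a + cx    # rotate, translate back
    y_rot = x_c * sin_a + y_c * cos_a + cy
    return x_rot, y_rot

x = np.array([0.0, 1180.0, 2360.0])
y = np.array([0.0, 0.0, 0.0])
xr, yr = rotate_about_centroid(x, y, -15.0)  # centroid (1180, 0) is preserved
```

Pairwise distances are also preserved, which is why the spacing check still holds after rotation.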
Basic Concept¶
Basic Concept: Staggered Layout
Simple explanation: Think of a chessboard. White squares are in one row, black squares are in the next row, shifted by half a square. That's exactly what a staggered grid does — the back-row turbines escape the "shadow" of the front turbines.
Analogy: Think of theater seating. If the back-row seats line up exactly with the front row, you cannot see the stage. But if the back row is shifted by half a seat, you look between the two people in front of you. Turbines work the same way: thanks to the offset, the rear turbine escapes the wake "shadow" of the turbine ahead.
In this project: On our 34 × V236-15.0 MW farm, the staggered layout reduces wake losses from ~12.7% to ~9.8% compared to the regular grid. This is ~62 GWh of additional energy per year — a revenue difference of ~€4.5M/year.
Part 2: Layout Optimization with Differential Evolution¶
Real Life Problem¶
You are building 34 houses on a plot of land. Each house's view depends on the positions of the other houses. Instead of placing the houses by hand, a computer program tries thousands of different layouts to find the best overall "view score." That is exactly what differential evolution (DE) does — mathematically.
What the Standards Say¶
DNV-RP-0003 recommends meta-heuristic algorithms for layout optimization because the problem is a non-convex, multi-modal optimization problem. There are many local optima — gradient-based methods get trapped in them. Differential evolution, a population-based evolutionary strategy, avoids these pitfalls.
What We Built¶
Changed files:
backend/app/services/p1/layout_optimizer.py — optimize_layout() function
backend/tests/test_layout_optimizer.py — Optimization tests (requires PyWake, @pytest.mark.slow)
Why is it important?¶
Why do we use differential evolution instead of gradient-based optimization? Turbine layout optimization is a combinatorial problem close to the NP-hard class. The search space of 34 turbines × 2 coordinates = 68 dimensions, together with the minimum spacing constraint, creates a complex "landscape." Gradient methods require smooth functions, but wake losses can be discontinuous with respect to turbine positions. DE is a gradient-free, population-based search — it has a high capacity to escape local optima.
Why do we apply the 5D minimum spacing constraint as a penalty function? There are two approaches in constrained optimization: (1) reject infeasible solutions outright, (2) soft-reject them with a penalty function. The first approach makes it hard to find feasible solutions in a dense search space. The penalty function adds a cost proportional to the square of the violation magnitude (penalty_weight × violation²), steering the optimizer toward the feasible region — yielding better convergence and higher solution quality.
Code Review¶
The heart of the optimization loop is scipy's differential_evolution solver:
def optimize_layout(
    initial_x, initial_y, site,
    maxiter=50, seed=42, penalty_weight=1e6,
) -> LayoutResult:
    n = len(initial_x)
    # Search bounds: initial bounding box + 2D margin
    margin = 2.0 * ROTOR_DIAMETER_M  # 472 m margin
    bounds = [(x_min, x_max), (y_min, y_max)] * n  # 68 dimensions (34×2)

    def objective(params):
        x = params[0::2]  # Even indices → x coordinates
        y = params[1::2]  # Odd indices → y coordinates
        # Spacing violation: quadratic penalty
        passes, actual_min = check_minimum_spacing(np.array(x), np.array(y))
        penalty = 0.0
        if not passes:
            violation = MIN_SPACING_M - actual_min  # How large is the violation?
            penalty = penalty_weight * violation**2  # Quadratic → large violation = large penalty
        # Net AEP via PyWake
        result = run_wake_analysis(np.array(x), np.array(y), site, turbine)
        return -result.net_aep_gwh + penalty  # Minimize (-AEP) = maximize AEP

    result = differential_evolution(
        objective, bounds=bounds,
        maxiter=maxiter, seed=seed,
        init="sobol",   # Sobol quasi-random init → better space coverage
        tol=1e-4,       # Convergence tolerance
        polish=False,   # Skip the final L-BFGS-B step (instability risk)
        x0=x0,          # Staggered grid as the starting point
    )
The parameters are interleaved into a single 1D vector: [x₁, y₁, x₂, y₂, ..., x₃₄, y₃₄] — the flat vector format scipy expects. Even indices (params[0::2]) give the x coordinates, odd indices (params[1::2]) give the y coordinates.
The init="sobol" choice creates the initial population from a quasi-random Sobol sequence rather than a pseudo-random one. The Sobol sequence covers the search space more uniformly — minimizing the "gaps" that can occur with random initialization. Result: a better solution in fewer iterations.
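The coverage claim can be checked with scipy's quasi-Monte Carlo utilities. A standalone demo (not part of the lesson's codebase) compares the discrepancy, a uniformity measure where lower is better, of Sobol versus pseudo-random points:

```python
import numpy as np
from scipy.stats import qmc

# 64 Sobol points vs 64 pseudo-random points in the 2D unit square
sobol_pts = qmc.Sobol(d=2, scramble=False).random_base2(m=6)  # 2^6 = 64 points
random_pts = np.random.default_rng(42).random((64, 2))

sobol_disc = qmc.discrepancy(sobol_pts)    # centered discrepancy
random_disc = qmc.discrepancy(random_pts)  # typically noticeably larger
```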
The choice of polish=False is intentional: the default polish=True tries to refine the DE result with the L-BFGS-B gradient method. However, our objective function may be discontinuous (due to the penalty function), and the gradient method can introduce instability. That is why polishing is skipped.
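The same DE recipe (quadratic penalty, init="sobol", polish=False) can be exercised on a toy problem; this is an illustrative stand-in for the real objective, not the lesson's optimize_layout:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy problem: spread 3 points over [0, 10] to maximize the total extent,
# with a quadratic penalty whenever neighbouring points are closer than 2.0.
MIN_SPACING = 2.0
PENALTY_WEIGHT = 1e3

def objective(params):
    pts = np.sort(params)
    gaps = np.diff(pts)
    violation = np.clip(MIN_SPACING - gaps, 0.0, None)
    penalty = PENALTY_WEIGHT * np.sum(violation**2)  # quadratic penalty
    return -(pts[-1] - pts[0]) + penalty             # minimize(-extent) = maximize extent

result = differential_evolution(
    objective,
    bounds=[(0.0, 10.0)] * 3,
    init="sobol",   # quasi-random initial population
    polish=False,   # skip the final gradient step: the penalty is non-smooth
    seed=42,
    maxiter=200,
    tol=1e-6,
)
pts = np.sort(result.x)  # endpoints near 0 and 10, gaps at least ~2
```

The converged solution pushes the outer points to the bounds while the penalty keeps every gap near or above the minimum spacing, mirroring how the real objective trades AEP against spacing violations.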
Basic Concept¶
Basic Concept: Differential Evolution
Simple explanation: Imagine 100 explorers looking for treasure in a forest. At the end of each round, the strategies of the most successful scouts are passed on to the others — through “mutation” and “crossover.” As generations pass, all the explorers converge on the area where the treasure is located. That's exactly what DE does.
Analogy: Think of evolution with recipes. 100 cooks start with different recipes. In each round, the most delicious dishes are selected, recipes are mixed (crossover) and random changes are added (mutation). After 50 generations, the recipe converges to a very delicious point.
In this project: In the 34 turbines × 2 coordinates = 68-dimensional search space, DE finds the positions that give the highest net AEP. The staggered grid is used as the starting point (x0), from which DE searches for better solutions. Result: wake losses reduced to ~8.7% (regular grid: ~12.7%, staggered: ~9.8%).
Part 3: Wind Farm Blockage Effect — Nygaard (2020) Model¶
Real Life Problem¶
Think of a traffic jam on a highway. Even before reaching the congestion point (an accident), vehicles begin to slow down — because the "information wave" from ahead (brake lights) propagates backwards. Wind farm blockage works on the same logic: the wind slows as it approaches the farm, even before the farm extracts energy from it. The farm acts as a "porous obstacle" in the atmosphere and creates an upstream pressure field.
What the Standards Say¶
Nygaard, N.G. et al. (2020) — "Modelling cluster wakes and wind farm blockage", Journal of Physics: Conference Series, 1618, 062072 — is the reference work that empirically models the blockage effect. IEC 61400-15 (draft) recommends accounting for the blockage effect in wind resource assessment. The model is calibrated against LES (Large Eddy Simulation) results and predicts 1.5-2.5% blockage loss for large offshore farms.
What We Built¶
Changed files:
backend/app/services/p1/blockage.py — Nygaard (2020) empirical blockage model
backend/tests/test_blockage.py — 11 unit tests: physics ranges, edge cases, scaling verification
The module performs a three-step calculation:
- Array density: total rotor area / farm footprint area
- Mean thrust coefficient (Ct): from the Ct curve at the hub-height wind speed
- Blockage loss: α × density × Ct × 100%
Why is it important?¶
Why do we calculate a separate "farm blockage" on top of the individual turbine induction effect? Individual turbine induction (near-field blockage) is the speed reduction immediately ahead of a single turbine — this is already accounted for in the wake models. Farm blockage (global blockage) is a collective effect: the pressure field created jointly by 34 turbines slows the wind before it reaches the farm. This differs from the sum of individual effects and must be modeled separately. The effect is around 1.5-2.5% — small as it may seem, it represents ~35-55 GWh ≈ €2.5-4.0M per year on a 510 MW farm.
Why do we compute the farm area with a convex hull? The farm footprint is the tightest convex envelope of the area covered by the turbines. A rectangular approximation overestimates the area for irregular layouts — which lowers the array density and underestimates blockage. In scipy, ConvexHull.volume gives the correct metric (for a 2D convex hull, the "volume" is the area).
Code Review¶
The three steps of the blockage calculation:
# Step 1: array density
def compute_array_density(num_turbines, rotor_diameter_m, farm_area_km2):
    rotor_area_m2 = np.pi / 4.0 * rotor_diameter_m**2   # π/4 × 236² ≈ 43,744 m²
    total_rotor_area_m2 = num_turbines * rotor_area_m2  # 34 × 43,744 ≈ 1,487,287 m²
    farm_area_m2 = farm_area_km2 * 1e6                  # km² → m²
    return total_rotor_area_m2 / farm_area_m2           # ~0.037 (dimensionless)
This ratio indicates how "crowded" the farm is. For typical offshore farms it is in the range 0.01-0.10. Higher density → stronger collective blockage.
# Steps 2 + 3: blockage loss
def estimate_blockage_loss_percent(num_turbines, x_positions, y_positions, ...):
    farm_area_km2 = _compute_convex_hull_area_km2(x_positions, y_positions)
    density = compute_array_density(num_turbines, rotor_diameter_m, farm_area_km2)
    mean_ct = float(get_v236_ct_curve(np.array([mean_wind_speed_ms]))[0])
    # Nygaard (2020): blockage = α × density × Ct × 100%
    blockage_pct = _BLOCKAGE_ALPHA * density * mean_ct * 100.0
    # α=2.5 × 0.037 × 0.70 × 100 ≈ 6.5% → actual value ~2.8% (Ct is speed-dependent)
The empirical coefficient _BLOCKAGE_ALPHA = 2.5 was calibrated against LES simulations. This constant is the model's "calibration knob" — it has been validated across different farm configurations and is consistent with DNV recommendations.
Edge cases are also handled carefully: one or two turbines cannot form a convex hull (at least 3 points are required in 2D), in which case blockage = 0 is returned — physically correct, because a single turbine or a pair cannot create a collective blockage effect.
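The hull-area helper itself is not shown in the excerpt; a sketch consistent with the behavior described above (ConvexHull.volume as the 2D area, zero for fewer than 3 points) might look like this:

```python
import numpy as np
from scipy.spatial import ConvexHull

def compute_convex_hull_area_km2(x_m: np.ndarray, y_m: np.ndarray) -> float:
    """Farm footprint as the area of the 2D convex hull, in km²."""
    if len(x_m) < 3:
        return 0.0  # 1-2 turbines enclose no area: no collective blockage
    hull = ConvexHull(np.column_stack([x_m, y_m]))
    # For a 2D hull, scipy's .volume is the enclosed area (.area is the perimeter)
    return hull.volume / 1e6

# A 1000 m × 1000 m square footprint is exactly 1 km²
square_km2 = compute_convex_hull_area_km2(
    np.array([0.0, 1000.0, 1000.0, 0.0]),
    np.array([0.0, 0.0, 1000.0, 1000.0]),
)
```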
Basic Concept¶
Basic Concept: Wind Farm Blockage
Simple explanation: Think of a car approaching a parking garage entrance. When the garage is busy, you notice traffic slowing as you approach — before you even reach the entrance. A wind farm is similar: the 34 turbines collectively act as a "barrier" to the wind, slowing it down before it reaches the farm.
Analogy: Hold your hand in front of a water current at the edge of a pool. Even before your hand stops the water, you can see the water slowing as it approaches. The cause is the pressure field your hand creates. A wind farm creates a similar pressure field in the atmosphere.
In this project: On our 34 × V236-15.0 MW farm the Nygaard model predicts a ~2.8% blockage loss. This is the second-largest loss in the AEP cascade after wake loss, with a revenue impact of ~53 GWh ≈ €3.8M per year.
Part 4: Gross-Net AEP Cascade — Multiplicative Loss Chain¶
Real Life Problem¶
Consider a water distribution network. Water from the dam loses a fraction at each junction — 5% leaks in one pipe, 2% in another, 3% in a third. How much water reaches your home? Simply adding the losses (5% + 2% + 3% = 10%) gives the wrong answer. The correct calculation: 95% of the water remains after the first junction and enters the second, 98% of that passes to the third... each loss applies to the remaining amount. This is the multiplicative loss cascade.
What the Standards Say¶
IEC 61400-15 (draft) and DNV-RP-0003 require losses to be applied multiplicatively in the gross-net energy calculation. Loss categories:
- Wake loss: wake effects between turbines (~5-13%)
- Blockage loss: farm-scale upstream slowdown (~1.5-2.5%)
- Electrical loss: cable and transformer losses (~2%)
- Availability loss: maintenance, failures, access (~5%)
- Environmental loss: icing, soiling, performance degradation (~1%)
Formula: Net = Gross × (1-wake) × (1-blockage) × (1-elec) × (1-avail) × (1-env)
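Applied literally, the formula is a fold over the loss fractions; the numbers below are the lesson's example values:

```python
from functools import reduce

# Each loss acts on what remains after the previous one (multiplicative cascade)
losses = {"wake": 0.087, "blockage": 0.02, "electrical": 0.02,
          "availability": 0.05, "environmental": 0.01}
gross_gwh = 2340.0
net_gwh = reduce(lambda aep, loss: aep * (1.0 - loss), losses.values(), gross_gwh)
# net_gwh ≈ 1929.7 GWh; the additive shortcut would give 2340 × 0.813 ≈ 1902.4 GWh
```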
What We Built¶
Changed files:
backend/app/services/p1/aep_calculator.py — Full gross-net cascade, RSS uncertainty, P50/P75/P90/P99, revenue calculation
backend/tests/test_aep_calculator.py — 25 unit tests: multiplicative validation, target values, CF range
Why is it important?¶
Why do we apply losses multiplicatively rather than additively? Additive approach: 2340 × (1 - (0.087 + 0.02 + 0.02 + 0.05 + 0.01)) ≈ 1902 GWh. Multiplicative approach: 2340 × 0.913 × 0.98 × 0.98 × 0.95 × 0.99 ≈ 1930 GWh. Difference ~27 GWh — roughly €2.0M/yr. The additive approach double-counts losses because it applies each loss to the original gross value, whereas in reality each loss acts on an already-reduced value. test_multiplicative_not_additive in our test code numerically verifies this difference.

Why does loss ordering matter? In a multiplicative cascade the order does not change the total (multiplication is commutative). But by IEC 61400-15 convention the order should be: physical losses (wake, blockage) → infrastructure losses (electrical) → operational losses (availability, environmental). This provides transparency for reporting and auditing.
Code Review¶
The multiplicative implementation of the loss cascade is a simple but critical loop:
def apply_loss_cascade(gross_aep_gwh, wake_loss_fraction, blockage_loss_fraction, ...):
    losses = [
        LossFactor("wake", wake_loss_fraction * 100.0, uncertainty_percent=3.0),
        LossFactor("blockage", blockage_loss_fraction * 100.0),
        LossFactor("electrical", electrical_loss_fraction * 100.0, uncertainty_percent=1.0),
        LossFactor("availability", availability_loss_fraction * 100.0, uncertainty_percent=2.0),
        LossFactor("environmental", environmental_loss_fraction * 100.0, uncertainty_percent=1.5),
    ]
    net = gross_aep_gwh
    for lf in losses:
        net *= 1.0 - lf.loss_percent / 100.0  # Multiplicative: remaining × (1 - loss)
    return net, losses
Each LossFactor carries both a loss percentage and an uncertainty standard deviation. The uncertainty_percent values are used in RSS aggregation in the next step. The blockage loss's uncertainty is set to 0 because the Nygaard model is itself an empirical calibration — its uncertainty lives at the model-selection level, not the parameter level.
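LossFactor itself is not shown in the excerpt; a minimal dataclass sketch consistent with its usage above (name, loss percent, optional uncertainty) might be:

```python
from dataclasses import dataclass

@dataclass
class LossFactor:
    """One stage of the gross-net cascade: a loss and its 1-sigma uncertainty."""
    name: str
    loss_percent: float
    uncertainty_percent: float = 0.0  # default 0: excluded from RSS aggregation

wake = LossFactor("wake", 8.7, uncertainty_percent=3.0)
blockage = LossFactor("blockage", 2.0)  # no parameter-level uncertainty
remaining_after_wake = 1.0 - wake.loss_percent / 100.0  # 0.913
```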
In the test suite multiplicative validation is done explicitly:
def test_multiplicative_not_additive(self):
    gross = 2340.0
    net, _ = apply_loss_cascade(gross, 0.087, 0.02, 0.02, 0.05, 0.01)
    # Must be multiplicative
    expected = gross * 0.913 * 0.98 * 0.98 * 0.95 * 0.99
    assert net == pytest.approx(expected, rel=1e-4)
    # Must NOT be additive
    additive = gross * (1 - (0.087 + 0.02 + 0.02 + 0.05 + 0.01))
    assert net != pytest.approx(additive, rel=1e-3)
This test is regression protection: it prevents a future code change from accidentally reverting to an additive calculation.
Basic Concept¶
Basic Concept: Multiplicative Loss Cascade
Simple explanation: Think of a cake. The first guest eats 10%. The second guest eats 5% of what is left — not 5% of the whole cake! Each guest takes a percentage of what remains after the previous one. AEP losses work with the same logic.
Analogy: Think of valves connected in series in a water pipe. Each valve passes a certain percentage of the water that reaches it. 91.3% passes the first valve, 98% of the remainder passes the second... The final output is the product of each valve's "permeability."
In this project: 2340 GWh gross × 0.913 (wake) × 0.98 (blockage) × 0.98 (electrical) × 0.95 (availability) × 0.99 (environmental) ≈ 1930 GWh net. Total loss ~17.5% — which translates to ~€29.5M/year of lost revenue.
Part 5: Uncertainty Analysis — RSS and P50/P75/P90 Exceedance Values¶
Real Life Problem¶
You shoot arrows at a target. Wind, hand shake, aiming error — each is a separate source of uncertainty. Your total uncertainty is the "square root of the sum of squares" (RSS) of these. If the sources are independent, the total uncertainty is less than their simple sum — that is the gift of statistics. A bank asks, "Can we trust that 9 out of 10 shots will hit the target?" — this is the P90 concept.
What the Standards Say¶
DNV-RP-0003 recommends combining sources of uncertainty using the RSS (Root Sum of Squares) method — this is based on the assumption that the sources are independent and normally distributed. P50/P75/P90/P99 exceedance values are calculated using z-scores of the normal distribution:
- P50: Median estimate (50% probability of exceedance) — “most likely” production of the project
- P75: z₇₅ = 0.674 → medium confidence
- P90: z₉₀ = 1.282 → banking standard, loan sizing
- P99: z₉₉ = 2.326 → most conservative estimate
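These z-scores are simply quantiles of the standard normal distribution; scipy reproduces them directly:

```python
from scipy.stats import norm

z_75 = norm.ppf(0.75)  # ≈ 0.674
z_90 = norm.ppf(0.90)  # ≈ 1.282
z_99 = norm.ppf(0.99)  # ≈ 2.326
```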
What We Built¶
Changed files:
backend/app/services/p1/aep_calculator.py — compute_rss_uncertainty(), compute_exceedance_values(), compute_aep_cascade()
Why is it important?¶
Why don't we simply add the uncertainties? The worst cases of independent uncertainty sources do not occur simultaneously. Under the independence assumption, RSS gives the realistic total uncertainty: σ_total = √(4² + 3² + 3² + 2² + 1.5² + 1² + 2² + 1.5²) = √47.5 ≈ 6.89%. The direct sum would be 4+3+3+2+1.5+1+2+1.5 = 18% — overly conservative, and it would make the project unfinanceable.

Why is P90 the banking standard? P90 means a 90% probability of exceeding this production level — only 1 year in 10 falls below it. Banks use P90 for loan sizing because it protects the lender from bad years. With this project's ~6.89% uncertainty, the P50-P90 difference is 1930 × 1.282 × 6.89/100 ≈ 170 GWh — a revenue difference of ~€12.3M per year.
Code Review¶
RSS calculation is a direct translation of mathematics into code:
DEFAULT_UNCERTAINTY_SOURCES = {
    "wind_resource": 4.0,         # Largest source: wind measurement uncertainty
    "wake_model": 3.0,            # Wake model calibration
    "long_term_correction": 3.0,  # Long-term correlation
    "wind_shear": 2.0,            # Wind profile assumption
    "power_curve": 1.5,           # Power curve guarantee
    "electrical": 1.0,            # Cable and transformer loss estimate
    "availability": 2.0,          # Maintenance and outage estimate
    "environmental": 1.5,         # Ice, soiling, etc.
}

def compute_rss_uncertainty(sources=None):
    if sources is None:
        sources = DEFAULT_UNCERTAINTY_SOURCES
    return math.sqrt(sum(s**2 for s in sources.values()))
    # sqrt(16 + 9 + 9 + 4 + 2.25 + 1 + 4 + 2.25) = sqrt(47.5) ≈ 6.89%
Exceedance values are calculated with z-scores of the normal distribution:
Z_75 = 0.674  # 75% quantile of the standard normal distribution
Z_90 = 1.282  # 90% quantile of the standard normal distribution
Z_99 = 2.326  # 99% quantile of the standard normal distribution

def compute_exceedance_values(p50_gwh, uncertainty_percent):
    sigma_frac = uncertainty_percent / 100.0
    return {
        "P50": p50_gwh,
        "P75": p50_gwh * (1.0 - Z_75 * sigma_frac),  # 1930 × (1 - 0.674×0.0689) ≈ 1840
        "P90": p50_gwh * (1.0 - Z_90 * sigma_frac),  # 1930 × (1 - 1.282×0.0689) ≈ 1759
        "P99": p50_gwh * (1.0 - Z_99 * sigma_frac),  # 1930 × (1 - 2.326×0.0689) ≈ 1621
    }
The P-value formula: P_xx = P50 × (1 - z_xx × σ/100). A higher exceedance level → a more conservative estimate → lower energy → higher confidence of exceeding it. The P50-P90 gap is the project's "uncertainty margin" — the smaller this margin, the higher investor confidence.
The full cascade function combines all steps and adds the capacity factor and revenue calculation:
# Capacity factor: CF = net_AEP / (P_rated × 8760 × n_turbines)
theoretical_gwh = rated_power_kw * 1e-6 * 8760.0 * num_turbines  # 15 MW × 8760 h × 34 = 4,467.6 GWh
cf = net_aep / theoretical_gwh  # ~1930 / 4467.6 ≈ 0.432
# Revenue: AEP_GWh × 1000 MWh/GWh × price_EUR/MWh / 1e6
revenue = net_aep * 1000.0 * price_eur_mwh / 1e6  # 1930 × 1000 × 72 / 1e6 ≈ €139M
Basic Concept¶
Basic Concept: P90 and Banking (P90 Exceedance & Bankability)
Simple explanation: Before a bank gives you a loan, it asks: "Will you be able to pay your debt even in the worst case scenario?" P90 means "You will produce at least this much electricity in 9 out of 10 years." The bank sizes your debt to P90 — so you can pay even in a bad wind year.
Analogy: If you know with 90% confidence that your salary is at least 5000 TL, the bank can give you a loan with a 4000 TL monthly installment. But if your salary sometimes drops to 3000 TL, the bank will reduce the installment to 2500 TL. P50 is your average salary; P90 is your salary "excluding the worst 10%."
In this project: P50 ≈ 1930 GWh (most likely), P90 ≈ 1759 GWh (banking estimate). Difference ~171 GWh ≈ €12.3M/yr. This shows that reducing uncertainty (better measurement, longer data duration) creates direct financial value.
Links¶
Where will these concepts continue:
- Layout optimization (Parts 1-2) → cost-benefit analysis of different layouts in the P1 LCOE calculation
- Blockage model (Part 3) → the uncertainty of the blockage coefficient (α) will be examined in the P1 sensitivity analysis
- AEP cascade (Part 4) → in the P1 LCOE, net AEP feeds directly into the cost calculation
- P90 value (Part 5) → the relationship between forecast uncertainty and P-values will be modeled in P4 AI Forecasting
- Capacity factor (Part 5) → in P2 HV Grid, grid connection capacity and cable sizing will be based on CF
Link to Lesson 005: In this lesson we used the functions run_wake_analysis and get_v236_ct_curve (Lesson 005, Parts 4-5) directly — the wake model is integrated into the pipeline both as the objective function of the layout optimization and as the Ct input of the blockage model.
The Big Picture¶
Focus of this lesson: Optimizing turbine placement, calculating the blockage effect and completing the gross-net AEP cascade.
graph TB
subgraph "P1 — Wind Resource & AEP Pipeline"
ERA5["ERA5 Reanalysis<br/>(Lesson 004)"]
WR["Wind Rose<br/>(Lesson 005)"]
WM["Wake Model<br/>BPA Gaussian<br/>(Lesson 005)"]
LO["🆕 Layout<br/>Optimization<br/>Regular / Staggered / DE"]
BK["🆕 Blockage Model<br/>Nygaard (2020)<br/>~2.8% loss"]
AEP["🆕 AEP Cascade<br/>Multiplicative losses"]
PV["🆕 P50/P75/P90<br/>RSS uncertainty<br/>Revenue calculation"]
end
ERA5 --> WR
WR -->|"sector Weibull"| WM
LO -->|"x, y positions"| WM
LO -->|"x, y positions"| BK
WM -->|"gross AEP, wake loss"| AEP
BK -->|"blockage loss %"| AEP
AEP -->|"net AEP (P50)"| PV
style LO fill:#1a5276,stroke:#2980b9,color:#ecf0f1
style BK fill:#1a5276,stroke:#2980b9,color:#ecf0f1
style AEP fill:#1a5276,stroke:#2980b9,color:#ecf0f1
style PV fill:#0e6655,stroke:#1abc9c,color:#ecf0f1
For full system architecture, see Lessons Overview.
Key Takeaways¶
- Turbine placement directly determines a wind farm's energy production — the staggered grid cuts wake losses by ~3 percentage points versus the regular grid, worth ~€4.5M per year in additional revenue.
- Differential evolution solves the 68-dimensional non-convex optimization problem without gradients — using Sobol initialization and enforcing the 5D minimum spacing constraint with a quadratic penalty function.
- Wind farm blockage is a collective effect distinct from individual turbine induction — the Nygaard (2020) model estimates a ~2.8% loss with the formula α × density × Ct.
- The gross-net AEP cascade must be multiplicative — the additive approach double-counts losses, an error of ~27 GWh/year.
- RSS uncertainty aggregation (√47.5 ≈ 6.89%) gives a realistic combined uncertainty for the individual sources — the direct sum (18%) would be overly conservative.
- P90 = P50 × (1 - 1.282 × σ/100) is the banking standard — the ~171 GWh difference between P50 and P90 implies ~€12.3M revenue uncertainty per year.
- 71 new unit tests (total suite: 156) verify each module's physical constraints, edge cases, and target values — providing regression protection.
Recommended Reading¶
Learning Roadmap — Phase 1: Section 1.2 Wake Modeling & Layout Optimization, Section 1.3 Energy Yield & Financial Analysis
| Source | Type | Why Read |
|---|---|---|
| Nygaard, N.G. et al. (2020), J. Phys.: Conf. Ser., 1618 | Academic article | Original paper for the blockage model used in this lesson — justification for the α=2.5 calibration |
| BVG Associates — Guide to an Offshore Wind Farm (2024) | Industry report | Industry reference for gross-net AEP cascade and P50/P90 calculation |
| TOPFARM Documentation (DTU) | Software documentation | Layout optimization framework — differential evolution application |
| DNV-RP-0003 — Energy Production Assessments | Technical standard | Loss categories, RSS uncertainty and exceedance calculation methodology |
| IRENA — Renewable Power Generation Costs 2024 | Report | Financial impact of AEP in the context of LCOE and revenue calculation |
Quiz — Test Your Understanding¶
Recall Questions¶
Q1: What is the main geometric difference between a regular grid and a staggered grid? How are the odd-numbered rows shifted in a staggered grid?
Answer
In a regular grid, all rows are aligned to the same x-coordinates — a plain rectangular grid. In a staggered grid, odd-numbered rows (row % 2 == 1) are shifted in the x-direction by half a column spacing (0.5 × dx = 590 m). This offset lets the trailing-row turbines escape the wake center of the front-row turbines, reducing wake losses by ~20-30%.
Q2: What are the three inputs to the Nygaard (2020) blockage model, and against what is the α coefficient calibrated?
Answer
Three inputs: (1) array density (total rotor area / farm footprint area), (2) mean thrust coefficient (Ct at hub-height wind speed), (3) the empirical coefficient α = 2.5. Formula: `blockage = α × density × Ct × 100%`. The α coefficient is calibrated against LES (Large Eddy Simulation) results and is consistent with DNV recommendations.
Q3: What is the RSS combined uncertainty percentage with the default uncertainty sources? What is the largest single source?
Answer
RSS = √(4² + 3² + 3² + 2² + 1.5² + 1² + 2² + 1.5²) = √47.5 ≈ 6.89%. The largest single source is wind resource uncertainty (wind_resource) at 4.0%, driven by factors such as measurement equipment calibration, measurement duration and spatial representativeness.
Comprehension Questions¶
Q4: Explain with a numerical example the difference between applying losses multiplicatively instead of additively. Assume a gross AEP of 2340 GWh and a total loss of 18.7%.
Answer
Additive approach: `2340 × (1 - 0.187) = 2340 × 0.813 = 1902.4 GWh`. Multiplicative approach (8.7%, 2%, 2%, 5%, 1% in turn): `2340 × 0.913 × 0.98 × 0.98 × 0.95 × 0.99 ≈ 1929.7 GWh`. Difference: ~27 GWh. The additive approach is flawed because it applies each loss to the original gross value — whereas in reality the electrical loss acts on the energy remaining after the wake and blockage losses. The additive approach "double-counts" losses and understates the AEP.
Q5: Why were polish=False and init="sobol" selected in the optimize_layout function? Explain the technical rationale for each.
Answer
`init="sobol"`: the Sobol quasi-random sequence spreads the initial population over the search space more homogeneously than a pseudo-random draw. This minimizes the "clustering" and "gap" problems of random initialization — yielding better solution quality in fewer iterations. `polish=False`: the default behavior tries to refine the DE result with the L-BFGS-B gradient method. However, our objective function is discontinuous at the feasible/infeasible boundary because of the quadratic penalty — a gradient method can behave erratically there and converge to a "better" solution that violates the constraint.
Q6: How does the difference between P50 and P90 affect the financing of a wind farm project? Give a numerical example for our 510 MW farm.
Reply
P50 represents the median annual production of the project (~1930 GWh). P90 represents the production exceeded with 90% confidence (~1759 GWh). Difference: ~171 GWh × €72/MWh ≈ €12.3M/year. Banks use P90 for loan sizing because they want assurance that the investor can meet debt service in all but the worst 10% of years. If the project were financed at P50, revenue would fall short of expectations in half of all years, raising the default risk to an unacceptable level. Consequently, reducing uncertainty (longer measurement campaigns, better calibration) narrows the P50-P90 gap and increases the project's borrowing capacity.

Challenge Question¶
Q7: In our 34-turbine farm, differential evolution reduced wake losses from 12.7% (regular grid) to 8.7% (optimized). However, the blockage effect was not taken into account during optimization (the objective function only uses run_wake_analysis). How does this omission affect the solution? What would change if we included blockage in the objective function?
Reply
The current optimization minimizes only wake losses. The blockage effect depends on the array density, which in turn depends on the convex hull area of the turbine positions. Minimizing wake loss generally pushes the turbines further apart (a wider spread), which increases the convex hull area and reduces the array density, thus indirectly reducing blockage as well. So the current approach, while not optimizing blockage explicitly, at least does not work against it. If we included blockage in the objective function, the optimizer would actively avoid compact layouts (high blockage), which would matter mainly in edge cases (scenarios where the optimizer packs turbines tightly). The practical effect is probably small (a 0.1-0.3% AEP difference) because blockage is already minor compared to wake loss. A more significant improvement would be to add turbulent fatigue loading to the objective function: this affects lifetime maintenance cost and enables a transition to LCOE optimization. An advanced approach is multi-objective optimization (Pareto front), in which AEP maximization, blockage minimization and fatigue minimization are pursued simultaneously, at a significantly higher computational cost.

Interview Corner¶
Simple Explanation¶
"How would you explain layout optimization and the AEP cascade to a non-engineer?"
Let's say you will plant 34 trees in a garden and each tree needs sun. If you plant trees in straight rows, the trees in the back will be in the shadow of the ones in front and will produce less fruit. If you plant trees in a "zigzag" pattern, each tree will receive more sun. But finding the best layout is complicated — the computer tries thousands of different layouts to find the one that produces the most fruit.
Think of it this way: as the wind approaches the orchard, the trees collectively slow it down a little before it even arrives; we call this "blockage". Then, when calculating your orchard's total fruit production, you subtract shade loss, blockage loss, spoiled fruit and harvest losses one after another. The important point is to remove them in a chain: each loss is calculated on the amount remaining after the previous one. Finally, you need to tell the bank "even in a bad year, I will produce at least this much fruit"; this is the P90 value. The bank sizes your loan on this conservative estimate.
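The "chain" removal of losses in the analogy is exactly the multiplicative cascade used elsewhere in this lesson. A minimal sketch with the Q4 figures (gross 2340 GWh; 8.7% wake, 2% blockage, 2% electrical, 5% availability, 1% environmental):

```python
def apply_loss_chain(gross_gwh, loss_fractions):
    """Apply each fractional loss to the energy remaining after the previous one."""
    net = gross_gwh
    for loss in loss_fractions:
        net *= (1.0 - loss)
    return net

# wake, blockage, electrical, availability, environmental
net = apply_loss_chain(2340.0, [0.087, 0.02, 0.02, 0.05, 0.01])
print(round(net, 1))  # ~1929.7 GWh; naive additive subtraction would give 1902.4 GWh
```

Note that the result does not depend on the order of the factors, only on applying each one to the remaining energy rather than to the gross value.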
Technical Description¶
"How would you explain placement optimization and the AEP cascade in a job interview?"
Turbine layout optimization is a 68-dimensional (34 turbines × 2 coordinates) non-convex search problem. Under the IEC 61400-1 compliant 5D minimum spacing constraint, we use scipy's differential_evolution solver to find the positions that maximize net AEP. The initial population is created with the Sobol quasi-random sequence, and constraint violations are handled with a quadratic penalty function. We systematically improve layout quality through the progression regular grid (~12.7% wake loss) → staggered grid (~9.8%) → DE-optimized (~8.7%).
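The setup just described can be sketched end to end. This is a toy reconstruction, not the lesson's optimize_layout: the turbine count is cut to 4, and the smooth "AEP surrogate" (total pairwise spread) and the penalty weight are placeholder assumptions; only the 5D spacing constraint, the quadratic penalty, `init="sobol"` and `polish=False` come from the lesson.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial.distance import pdist

D = 236.0              # V236 rotor diameter [m]
MIN_SPACING = 5.0 * D  # IEC 61400-1-motivated 5D minimum spacing
N = 4                  # toy turbine count (the lesson optimizes 34)

def objective(x):
    # Placeholder surrogate for -net_AEP: reward overall spread, then add a
    # quadratic penalty for every pair of turbines closer than 5D.
    dists = pdist(x.reshape(N, 2))
    penalty = np.sum(np.maximum(0.0, MIN_SPACING - dists) ** 2)
    return -dists.sum() + 1e-2 * penalty

bounds = [(0.0, 8000.0)] * (2 * N)  # 2N-dimensional search box [m]
result = differential_evolution(objective, bounds,
                                init="sobol",   # quasi-random initial population
                                polish=False,   # skip L-BFGS-B (penalty is non-smooth)
                                seed=0, maxiter=200, tol=1e-6)
layout = result.x.reshape(N, 2)  # optimized (x, y) positions
```

With a real wake model in place of the surrogate, each objective evaluation would be far more expensive, which is why the lesson bounds the iteration budget.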
The blockage effect is calculated with the empirical model of Nygaard et al. (2020): blockage = α × density × Ct, where α = 2.5 is LES-calibrated, density is the convex-hull-based array density and Ct is the hub-height mean thrust coefficient. We estimate ~2.8% blockage loss for our 34-turbine V236 array. The gross-net AEP cascade is an IEC 61400-15 and DNV-RP-0003 compliant multiplicative loss chain: Net = Gross × (1-wake) × (1-blockage) × (1-elec) × (1-avail) × (1-env). Uncertainty sources are combined with RSS (√47.5 ≈ 6.89%), and P50/P75/P90/P99 exceedance values are calculated with normal-distribution z-scores. For the optimized layout, P50 ≈ 1930 GWh and P90 ≈ 1759 GWh; this ~€12.3M/year difference shows the direct financial value of uncertainty reduction. All modules were validated with 71 unit tests (156 in total).
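The headline numbers in this answer can be reproduced in a few lines. A sketch under stated assumptions: Ct = 0.80 and the ~106 km² footprint are hypothetical values chosen so the estimate lands near the quoted ~2.8%, while the uncertainty percentages, the P50 figure and the €72/MWh price come from the lesson.

```python
import math
from scipy.stats import norm

# --- Blockage, Nygaard et al. (2020)-style empirical estimate ---
ALPHA, CT_MEAN, D, N_TURB = 2.5, 0.80, 236.0, 34  # Ct is an assumed mean value
FARM_AREA_M2 = 106e6                              # hypothetical convex-hull footprint
rotor_area = N_TURB * math.pi * (D / 2.0) ** 2
blockage_pct = ALPHA * (rotor_area / FARM_AREA_M2) * CT_MEAN * 100.0  # ~2.8 %

# --- RSS combination of the lesson's uncertainty sources [%] ---
sources_pct = [4.0, 3.0, 3.0, 2.0, 1.5, 1.0, 2.0, 1.5]
sigma_pct = math.sqrt(sum(u ** 2 for u in sources_pct))  # sqrt(47.5) ~ 6.89 %

# --- Exceedance values via normal-distribution z-scores ---
p50 = 1930.0  # GWh, optimized-layout median net AEP

def p_exceed(p50_gwh, sigma_rel, prob):
    """Production exceeded with probability `prob`, z-score method."""
    return p50_gwh * (1.0 - norm.ppf(prob) * sigma_rel)

p90 = p_exceed(p50, sigma_pct / 100.0, 0.90)  # ~1759 GWh
margin_eur = (p50 - p90) * 1000.0 * 72.0      # GWh -> MWh at EUR 72/MWh, ~EUR 12.3M/yr
```

The same `p_exceed` call with prob = 0.75 or 0.99 yields the P75 and P99 values mentioned above.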