Assignment 2

Shader Live Coding
Echo Pulse

IMGD/CS 4300  ·  D-Term 2026  ·  Performance recorded in wgsl_live

Performance Recording

Technical Approach

The shader runs entirely inside wgsl_live, a WebGPU fragment-shader live-coding environment. Every pixel is computed per-frame in a single @fragment function. The core visual engine is a frame-feedback loop: lastframe() samples the previous frame through a slowly rotating UV transform (rotate()), creating an accumulating echo-like trail that never quite repeats.
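The stability of a weighted feedback loop like this can be reasoned about as a geometric series: if each frame injects a seed brightness s and retains a fraction w of the previous frame, accumulated brightness converges to s / (1 - w) whenever w < 1. A minimal scalar sketch in plain Python (an illustration of the math, not shader code; the names are hypothetical):

```python
# Scalar model of a frame-feedback loop: each frame keeps a
# fraction w of the previous brightness and adds a seed amount s,
# mirroring lastframe() * weight + new shape in the shader.
def feedback_steady_state(s, w, frames=500):
    b = 0.0
    for _ in range(frames):
        b = b * w + s
    return b

# With w < 1 the series converges to the analytic limit s / (1 - w):
# feedback_steady_state(s=0.1, w=0.9) ≈ 0.1 / (1 - 0.9) = 1.0
```

This is why the trail fades rather than saturates: any single bright event is multiplied by w each frame and decays geometrically.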

A Lissajous-drifting circle (sin/cos on the centre point, soft edge via smoothstep) acts as the seed shape, while a polar ripple field (length, atan2, nested sin/cos) generates the radiating wave texture underneath. Live audio drives the feedback weight via audio[0] (bass) and expands the circle radius via audio[2] (highs). pow + scalar decay prevent unbounded brightness accumulation, and a radial smoothstep vignette darkens the edges. Mouse position shifts the feedback UV offset in real time.
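The brightness guard is worth checking numerically: at full bass the feedback weight reaches 0.42 + 0.65 = 1.07, which is above 1 and would diverge on its own, yet the clamp, pow(1.018), and 0.965 decay chain forces a fixed point below white. A scalar Python model of that mechanism (an illustration using the shader's constants, not the shader itself):

```python
# One frame of the shader's brightness guard, reduced to a scalar:
# feedback gain can exceed 1, but clamp -> pow -> decay bounds it.
def advance_frame(b, weight=1.07, inject=0.3):
    b = b * weight + inject          # feedback + new ripple/circle light
    b = min(max(b, 0.0), 1.0)        # clamp(col, vec4f(0.), vec4f(1.))
    return (b ** 1.018) * 0.965      # pow gamma nudge + scalar decay

b = 0.0
for _ in range(1000):
    b = advance_frame(b)
# b settles at exactly 0.965 (the post-clamp decay value) instead of
# growing without bound, so whites never bloom past full brightness.
```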

Aesthetic Intent

The piece is interested in memory as texture: each frame is not erased but rotated and folded back into itself, so the image accumulates its own history the way an echo chamber accumulates sound. The drifting circle is less a shape than a cursor — something that marks time by leaving a residue. The colour feels metabolic rather than designed: R/G/B channels drift at incommensurable frequencies so the palette is never the same twice, cycling through warm amber, cold teal, and brief flashes of magenta.

During the performance I treated the mouse as a physical instrument — small lateral movements shear the feedback field, breaking rotational symmetry and introducing a sense of breath or instability. Audio reactivity was kept subtle: the image should feel alive to sound without obviously pulsing on the beat.

Source Code

a2.wgsl  ·  wgsl_live fragment shader
// A2 Shader Live Coding — Echo Pulse
// IMGD/CS 4300, D-Term 2026  ·  Ctrl+Enter to reload

@fragment
fn fs( @builtin(position) pos : vec4f ) -> @location(0) vec4f {

  let t   = seconds();
  let uv0 = uvN( pos.xy );    // 0..1
  let p   = uv( pos.xy );      // -1..1 centred

  let centre = vec2f( sin(t*1.1)*.28, cos(t*.85)*.28 );
  let d      = distance( p, centre );
  let radius = .09 + audio[2] * .06;
  let circle = 1.0 - smoothstep( radius, radius+.03, d );

  let r      = length( p );
  let angle  = atan2( p.y, p.x );
  let ripple = sin( r*14.0 - t*2.5 + cos(angle*3.0+t) )*.5+.5;

  let rot = rotate( uv0, t/7.0 + sin(t*.3)*.15 ) + mouse.xy*.06;
  let fb  = lastframe( clamp(rot, vec2f(0.), vec2f(1.)) );

  let bass = .42 + audio[0] * .65;
  let mid  = abs( sin(t*.6) ) * .15 + audio[1] * .1;

  var col  = fb * bass;
  col.r += ripple * (.10 + mid);
  col.g += ripple * abs( sin(t*.45) ) * .08;
  col.b += (1.0 - ripple) * .07 + mid * .5;

  let stamp = vec4f(
    mix(.3, .95, fract(t*.18)),
    mix(.6,  .2,  fract(t*.11)),
    mix(.8,  .4,  fract(t*.23)),
    1.0
  );
  col += vec4f(circle) * stamp;

  col  = pow( clamp(col, vec4f(0.), vec4f(1.)), vec4f(1.018) ) * .965;
  col *= smoothstep( 1.1, .25, r );

  return clamp( col, vec4f(0.), vec4f(1.) );
}

Functions Used (18 total — ≥12 required)

 #   Function       Source      Role in shader
 1   uvN()          wgsl_live   pixel → 0..1 UV
 2   uv()           wgsl_live   pixel → -1..1 centred
 3   seconds()      wgsl_live   elapsed time driver
 4   rotate()       wgsl_live   spin feedback UV each frame
 5   lastframe()    wgsl_live   previous frame → echo trail
 6   audio[]        wgsl_live   bass / mid / high frequency bands
 7   sin()          WGSL        centre drift, ripple, colour tempo
 8   cos()          WGSL        centre drift, nested ripple
 9   distance()     WGSL        circle SDF
10   smoothstep()   WGSL        soft edge + vignette
11   length()       WGSL        polar radius
12   atan2()        WGSL        polar angle
13   clamp()        WGSL        UV guard + colour clip
14   abs()          WGSL        unsigned modulation
15   mix()          WGSL        colour interpolation
16   fract()        WGSL        cyclic colour channels
17   pow()          WGSL        gamma decay on feedback
18   step()         WGSL        reserved for live edits

Peer Feedback & Reflection

[Placeholder — fill in after collecting peer feedback] A classmate noted ...

LLM Disclosure

LLM use: An AI assistant was used to scaffold the HTML/CSS of this page and suggest initial parameter ranges for feedback decay and audio weights. All WGSL shader logic, visual design, and performance choices are my own. No significant shader code was generated by an LLM.