Imitation: Radar Waveform Design in a Spectrally Crowded Environment via Nonconvex Quadratic Optimization

On a whim, I chose to study Information and Communication Engineering (closest related keyword: Signal Processing; sister major: Electrical Engineering). I was paired with Cui GuoLong and his post-doc Xiang Xianyu. In order to learn more about radar filter design, I thought it would be a great idea to imitate one of his papers. He recommended "Radar Waveform Design in a Spectrally Crowded Environment via Nonconvex Quadratic Optimization". Prerequisites are engineering mathematics, good applied linear algebra chops, and convex optimization (into graduate-level mathematics). In the original implementation, Prof. Cui uses MATLAB and the CVX library, and borrows code from some applied mathematics papers. I used CVXPY, the Python counterpart to CVX, along with the same MATLAB code. In the rest of this blog post, I’ll describe the waveform design problem and provide code and some diagnostic plots. Thanks to Prof. Cui, his post-doc Xian, and the China Scholarship Council for the opportunity to take some advanced mathematics classes.

Full disclosure: there may be some gaps in this. I started writing this a year ago and implemented it around then, so I don’t remember exactly where I left off, but I’ll post all of the code below, which can get you up to a running Algorithm 1. I’m just being transparent: there are some gaps in my explanation. They run Algorithm 1 on P_4 .

Specifically, I’m missing how the authors transformed the quadratic program (QP) into an enlarged quadratic program (EQP), and some of the other modifications that transform P_1 into P_4 . This is important theory, but this is a blog post and it’s getting long, so I’m going to leave it out and move on.

Let \mathbf{v} = \alpha_T \mathbf{c} + \boldsymbol{\eta}, \mathbf{v} \in \mathbb{C}^N be an N-dimensional vector of observations, where \alpha_T is a complex parameter accounting for channel propagation and backscattering effects. Trimming out a lot of the technical details, we first compute the energy spectral density (ESD). Recall, the ESD is the squared magnitude of the Fourier transform of the observations over all time.

Aubry et al. define the matrix associated with the energy spectral density on the kth licensed band as:

\mathbf{R}_I^k(m,l) =
\begin{cases}
f_2^k - f_1^k, & m = l \\[4pt]
\dfrac{e^{j 2 \pi f_2^k (m-l)} - e^{j 2 \pi f_1^k (m-l)}}{j 2 \pi (m-l)}, & m \neq l
\end{cases}

for (m, l) \in \{1, \ldots, N\}^2

def compute_RI(N, f_1):
    ## R_I^k for the licensed band f_1 = [f_1^k, f_2^k]
    ## (2*pi factor in the exponent per the ESD definition above)
    R1 = np.zeros((N, N), dtype = complex)

    for m in range(0, N):
        for l in range(0, N):
            if (m != l):
                R1[m, l] = (np.exp(2j * np.pi * f_1[1] * (m - l)) - \
                            np.exp(2j * np.pi * f_1[0] * (m - l))) / \
                           (2j * np.pi * (m - l))
    for n in range(0, N):
        R1[n, n] = f_1[1] - f_1[0]
    return R1

Next, I computed \mathbf{C}_0 = \mathbf{c}_0 \mathbf{c}_0^{\dagger}, where \mathbf{c}_0 is a constant modulus waveform:

c_0(n) = \frac{1}{\sqrt{N}} e^{j 2 \pi K_s \left( \frac{n}{f_s} \right)^2}

Here \mathbf{c}_0^{\dagger} is the conjugate transpose of \mathbf{c}_0, using the notation of the paper. \mathbf{C}_0 is then just the outer product of \mathbf{c}_0 with itself, that is, \mathbf{c}_0 \mathbf{c}_0^{\dagger}.
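Here's a minimal sketch of that construction (it mirrors the compute_C0 function in my full listing below; the chirp rate K_s and sampling frequency f_s are just the values I used there):

def compute_C0(N):
    ## reference constant-modulus chirp: c_0(n) = (1/sqrt(N)) e^{j 2 pi K_s (n/f_s)^2}
    K_s = (750 * 10**3) / (200 * 10**-6)   ## chirp rate from my full listing below
    f_s = 200.0                            ## sampling frequency from my full listing below
    c0 = np.array([1 / np.sqrt(N) * np.exp(2j * np.pi * K_s * (n / f_s) ** 2)
                   for n in range(N)], dtype = complex)
    C0 = np.outer(c0, c0.conj())           ## C_0 = c_0 c_0^dagger
    return C0, c0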

Next, the authors make some adjustments in order to make the problem solvable: by neglecting a constraint, they turn the problem into an enlarged quadratic problem relaxed (EQPR). The optimization problem becomes:

\max_{\mathbf{C}} ~ \text{tr}(\mathbf{C}\mathbf{R}) \\ \text{s.t.} ~ \text{tr}(\mathbf{C}) = 1 \\ \text{tr}(\mathbf{C}\mathbf{R}_I) \leq E_I \\ \text{tr}(\mathbf{C}\mathbf{C}_0) \geq \delta_{\epsilon} \\ \mathbf{C} \succcurlyeq \mathbf{0}

And then we can use CVXPY, a Python convex optimization modeling library modeled after the MATLAB library CVX. The authors transform the problem so that it fits disciplined convex programming, a set of rules for expressing a convex program so that a parser can verify convexity and hand the problem to an appropriate solver. Here’s the CVXPY code; it’s really simple.

def P4(R, R_I, C0, EI = .264, eps = 1.3):  ## eps = .264
    delta_eps = (1 - eps / 2.0) ** 2
    N = R.shape[0]
    C = cp.Variable((N, N), PSD = True)  ## to mean c c'
    objective = cp.Minimize(cp.trace(C @ np.real(R)) + cp.trace(C @ np.imag(R)))
    constraints = [cp.trace(C) == 1,
                  (cp.trace(C @ np.real(R_I)) + cp.trace(C @ np.imag(R_I))) <= EI,
#                     (cp.trace(C @ np.real(R_I))) <= EI,
#                    (cp.trace(C @ np.real(C0)) + cp.trace(C @ np.imag(C0))) >= delta_eps,
                   C >> 0]
    problem = cp.Problem(objective, constraints)
    problem.solve()
    return [C, problem]
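For reference, this is how it gets called in the full listing below, once the matrices M, R_I_k, and C0 from the performance-analysis section have been built; the relaxed solution comes back in the CVXPY variable's .value and the optimal value in problem.value:

P4_out = P4(M, R_I_k, C0)
C_bar = np.asmatrix(P4_out[0].value)   ## optimal (relaxed) C from the SDP
v_star = P4_out[1].value               ## optimal value of P_4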

Next the authors do some performance analysis, which is like model validation but for radar engineering, i.e., does what I created actually work? Engineers almost always do this.

In order to simulate the "spectrally crowded" environment, the authors construct an additive disturbance matrix composed of:

white interference:

\sigma_0 \mathbf{I}

where \mathbf{I} is the identity matrix and \sigma_0 = 0 dB is the thermal noise level.

unlicensed narrow band continuous jammers:

\sum_{k=1}^{K_j} \sigma_{J, k} \textbf{R}_{J,k}

where K_j =2 is the number of active and unlicensed jammers,

\sigma_{J,k} is the energy of the kth active jammer; for k = 1, 2 we set \sigma_{J,1} = 40 dB and \sigma_{J,2} = 50 dB,

\mathbf{R}_{J,k} = \mathbf{r}_{J,k} \mathbf{r}_{J,k}^{\dagger} is the normalized disturbance covariance matrix of the kth active unlicensed jammer, and

r_{J,k}(n) = e^{j 2 \pi f_{J,k} n / f_s}, where f_{J,k} is the Doppler shift of the kth jammer, and we set f_{J,1} / f_s = 0.7, f_{J,2} / f_s = 0.75

licensed coexisting telecommunication networks, spectrally overlaid with the radar of interest:

\sum_{k=1}^K \frac{\sigma_{I,k}}{\Delta f_k} \mathbf{R}_I^k

where K = 7 is the number of licensed radiators,

\sigma_{I,k} is the energy of the kth coexisting telecommunication network operating on the normalized frequency band [f_1^k, f_2^k], with \sigma_{I,k} = 10 dB for k = 1, \ldots, K,

and \mathbf{R}_I^k as defined above.

The total additive spectral model, composed (in this order) of white interference, licensed telecommunication networks, and unlicensed jammers, is then

\mathbf{M} = \sigma_0 \mathbf{I} + \sum_{k=1}^K \frac{\sigma_{I,k}}{\Delta f_k} \mathbf{R}_I^k + \sum_{k=1}^{K_j} \sigma_{J,k} \mathbf{R}_{J,k}

The other licensed telecommunication networks are spectrally overlaid and can be accounted for in the model. The authors then consider baseband-equivalent radar stop-bands, denoted \omega_k.

We can then implement the additive model in Python.

N = 50    ## code length, as in the full listing below
K = 7     ## number of licensed radiators
K_j = 2   ## number of active unlicensed jammers

r_1_k = np.zeros(N, dtype = complex)
r_2_k = np.zeros(N, dtype = complex)
for i in range(0, N):
    r_1_k[i] = np.exp(2 * np.pi * .7 * (i + 1) * 1j)
    r_2_k[i] = np.exp(1j * 2 * np.pi * .75 * (i + 1))
R_J_1 = np.outer(r_1_k, np.asmatrix(r_1_k).H)
R_J_2 = np.outer(r_2_k, np.asmatrix(r_2_k).H)
R_J = [R_J_1, R_J_2]

omega_1 = [0.0000, 0.0617]
omega_2 = [0.0988, 0.2469]
omega_3 = [0.2593, 0.2840]
omega_4 = [0.3086, 0.3827]
omega_5 = [0.4074, 0.4938]
omega_6 = [0.5185, 0.5556]
omega_7 = [0.9383, 1.0000]
omega = [omega_1, omega_2, omega_3, omega_4,
         omega_5, omega_6, omega_7] 

R_I_1 = compute_RI(N, [0.0000, 0.0617])
R_I_2 = compute_RI(N, [0.0988, 0.2469])
R_I_3 = compute_RI(N, [0.2593, 0.2840])
R_I_4 = compute_RI(N, [0.3086, 0.3827])
R_I_5 = compute_RI(N, [0.4074, 0.4938])
R_I_6 = compute_RI(N, [0.5185, 0.5556])
R_I_7 = compute_RI(N, [0.9383, 1.0000])
R_I = [R_I_1, R_I_2, R_I_3, R_I_4,
       R_I_5, R_I_6, R_I_7]

sigma_J = [50, 40]

M = np.zeros((N, N), dtype = complex)
R_I_k = np.zeros((N, N), dtype = complex)
for k in range(0, K):
    M += 10 / (omega[k][1] - omega[k][0]) * R_I[k]
    R_I_k += R_I[k]    
R_I_k = np.matrix(R_I_k)

for k in range(0, K_j):
    M += sigma_J[k] * R_J[k]
    
M = np.asmatrix(M)
I = np.eye(N, N) 

We can then apply Algorithm 1. In the paper, they do additional relaxations, but I’m just getting the essence of the paper.

Inputs: \textbf{c}_0, \textbf{R}, \textbf{R}_I, E_I, \delta_{\epsilon}

Output: optimal solution \textbf{c}^* to P_1

  1. Solve the SDP P_4, finding an optimal solution \bar{\textbf{C}} and the optimal value v^*.
  2. if Rank(\bar{\textbf{C}}) = 1 then
  3. set \bar{\textbf{c}} = \sqrt{\lambda_{max}(\bar{\textbf{C}})} \, \textbf{v}, with \textbf{v} the eigenvector associated with the maximum eigenvalue of \bar{\textbf{C}};
  4. else if Rank(\bar{\textbf{C}}) = 2 then
  5. find \bar{\textbf{c}} = D_2(\bar{\textbf{C}},  \textbf{R}, \textbf{I},  \textbf{R}_I, \textbf{C}_0)
  6. else
  7. find \bar{\textbf{c}} = D_1(\bar{\textbf{C}},  \textbf{R}, \textbf{I},  \textbf{R}_I, \textbf{C}_0)
  8. end
  9. output \textbf{c}^* = \bar{\textbf{c}} e^{j \arg(\bar{\textbf{c}}^\dagger \textbf{c}_0)}
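Just to make steps 3 and 9 concrete, here's a minimal sketch of the rank-one case (my own illustration, assuming C_bar and c0 are the numpy arrays produced by the code in this post):

w, v_arr = np.linalg.eigh(C_bar)            ## eigenvalues in ascending order, columns are eigenvectors
v = v_arr[:, np.argmax(w)]                  ## eigenvector of the largest eigenvalue
c_bar = np.sqrt(np.max(w)) * v              ## step 3: c_bar = sqrt(lambda_max) * v
c0_vec = np.asarray(c0).flatten()
c_star = c_bar * np.exp(1j * np.angle(np.vdot(c_bar, c0_vec)))   ## step 9: rotate by arg(c_bar^dagger c_0)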

Proof from the Appendix, a Bit on Convex Optimization and Mathematical Theory

That’s the implementation, but note that most of the work in this paper was done in the appendix. I’ll walk through a few of the proofs and provide some justification for each step. Most of the work for this paper, I suppose, was the theoretical work. I was first introduced to formal logic and proof writing in a class called Transitions at Michigan State University, taught by Dr. Hyejin Kim. We were taught direct proofs, proof by contradiction, and induction, on mathematics we were already familiar with by prerequisite, so the focus was on understanding proof-writing strategies. One of the first exercises was walking through direct proofs: we’d take a proof and, at each step, write out the theorem or the reason why that step was valid. It becomes overkill when you’re more comfortable with the branch of mathematics you’re dealing with, but I’ll do it here as an exercise, both for myself and for those interested.

When solving a convex optimization problem, we have an objective or cost function, f(\cdot): \mathbb{R}^n \rightarrow \mathbb{R}, which we’d like to either maximize or minimize. The objective function is subject to constraints, which are either equality constraints or inequality constraints; the inequalities can be strict (<, >) or non-strict (\leq, \geq).
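As a toy illustration of that structure (this is just my own minimal CVXPY sketch, not from the paper): an objective, one equality constraint, and one non-strict inequality constraint.

import cvxpy as cp
import numpy as np

x = cp.Variable(3)
objective = cp.Minimize(cp.sum_squares(x - np.array([1.0, 2.0, 3.0])))  ## convex cost function
constraints = [cp.sum(x) == 1,   ## equality constraint
               x >= 0]           ## non-strict inequality constraint
problem = cp.Problem(objective, constraints)
problem.solve()
print(x.value, problem.value)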

The first thing we’d like to do is check that the intersection of the constraint sets contains at least one point, i.e., that the feasible set is not empty. Let’s write this out more rigorously. In Aubry et al., the constraints are placed on the interference energy E_I, the similarity parameter \epsilon, and a unit-energy reference code \textbf{c}_0.

In Appendix A, for problem P_1, we need \textbf{c}^\dagger \textbf{R}_I \textbf{c} \leq E_I to be satisfiable, E_I being the interference energy. Earlier in the paper, we defined ||\textbf{c}|| = 1. From linear algebra, we know a Hermitian matrix has an orthonormal set of eigenvectors, with the eigenvalues acting as scale factors along those directions, and we can order them from largest to smallest in magnitude. Since the magnitude of \textbf{c} is 1…

While writing this, I thought deeper about the authors’ claims. The authors claim that, for feasibility of \textbf{c}^\dagger \textbf{R}_I \textbf{c} \leq E_I, we need E_I \geq \lambda_{min}(\textbf{R}_I). But something irked me about this. So I did some thinking, and I ran it by a more serious mathematician to verify, and I was correct.

Here’s a sketch of a proof, since this is a blog.

I set \textbf{c} = (1, 0, 0, 0, \ldots), since when spectrally decomposing the matrix, this will just give the largest eigenvalue. This contradicts the authors’ claim, because with the claim above, there could be values of E_I that are less than \lambda_{max}(\textbf{R}_I).
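If you want to poke at the feasibility claim numerically, here's a small sanity check of my own (not from the paper), using compute_RI from above: for any unit-norm \textbf{c}, the quadratic form \textbf{c}^\dagger \textbf{R}_I \textbf{c} always lands between the smallest and largest eigenvalues of \textbf{R}_I.

N = 50
R_I_1 = compute_RI(N, [0.0000, 0.0617])        ## one of the licensed bands used below
eigvals = np.linalg.eigvalsh(R_I_1)            ## real eigenvalues in ascending order
c = np.zeros(N, dtype = complex)
c[0] = 1.0                                     ## the unit-norm c = (1, 0, 0, ...) from the sketch
quad_form = np.real(np.vdot(c, R_I_1 @ c))     ## c^dagger R_I c
print(eigvals[0], quad_form, eigvals[-1])      ## lambda_min <= c^dagger R_I c <= lambda_max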

Related to proof by contradiction: we can disprove a (universally quantified) statement by providing a single counterexample. I.e.:

Suppose we have a statement \textbf{A} that we’d like to prove or disprove. If you find any single counterexample to \textbf{A}, then the statement is not true. So some advice when proof writing: if something says "prove or disprove," the first thing I do is try to disprove it. If you find a counterexample, you are done. Otherwise, the exercise will help you get started exploring the problem a bit more.

I’m going to wrap this one up; we’re getting long. We’ve learned how to tear a paper apart, imitate implementations, some telecom theory, some applied math and linear algebra, some signal processing, and some formal logic.

As usual, any comments, feedback or critique is appreciated.

Thank you to the Chilean mathematician on Instagram, who has the handle @elrincondegauss, who verified my intuition was true and provided a more rigorous explanation.

And per usual, here’s all the code I used. I skipped some parts of the paper to keep the description above brief, but I captured the essence.

#!/usr/bin/env python
# coding: utf-8
# Radar Waveform Design in a Spectrally Crowded Environment
import cvxpy as cp
import numpy as np
from numpy import linalg as LA
import matplotlib.pyplot as plt
from oct2py import Oct2Py
from scipy import signal

oc = Oct2Py()
# In[2]:


def compute_RI(N, f_1):
    ## R_I^k for the licensed band f_1 = [f_1^k, f_2^k]
    ## (2*pi factor in the exponent per the ESD definition above)
    R1 = np.zeros((N, N), dtype = complex)

    for m in range(0, N):
        for l in range(0, N):
            if (m != l):
                R1[m, l] = (np.exp(2j * np.pi * f_1[1] * (m - l)) - \
                            np.exp(2j * np.pi * f_1[0] * (m - l))) / \
                           (2j * np.pi * (m - l))
    for n in range(0, N):
        R1[n, n] = f_1[1] - f_1[0]
    return R1


# In[5]:


def compute_C0(N):
    K_s = (750 * 10**3) / (200 * 10**-6)   ## chirp rate (note: ** for powers, ^ is XOR in Python)
    f_s = 200.0
    c0 = np.array([], dtype = complex);
    for n in range(0, N):
        c0 = np.append(c0, 1 / np.sqrt(N) * np.exp(2j * np.pi * K_s * pow(n / f_s, 2)))
    c0 = np.asmatrix(c0)
    C0 = np.outer(c0, c0.H)
    return [np.asmatrix(C0), c0]


# In[6]:


def P4(R, R_I, C0, EI = .264, eps = 1.3):  ## eps = .264
    delta_eps = (1 - eps / 2.0) ** 2
    N = R.shape[0]
    C = cp.Variable((N, N), PSD = True)  ## to mean c c'
    objective = cp.Minimize(cp.trace(C @ np.real(R)) + cp.trace(C @ np.imag(R)))
    constraints = [cp.trace(C) == 1,
                  (cp.trace(C @ np.real(R_I)) + cp.trace(C @ np.imag(R_I))) <= EI,
#                     (cp.trace(C @ np.real(R_I))) <= EI,
#                    (cp.trace(C @ np.real(C0)) + cp.trace(C @ np.imag(C0))) >= delta_eps,
                   C >> 0]
    problem = cp.Problem(objective, constraints)
    problem.solve()
    return [C, problem]


# In[7]:


def mag2db(x):
    return 20 * np.log10(np.sqrt(np.real(x) ** 2 + np.imag(x) ** 2))


# In[8]:


## performance analysis

N = 50
sigma_0 = 0 ## thermal noise level
K = 7 ## number of licensed radiators
sigma_ik = 10 ## energy of the kth coexisting telecommunication network
K_j = 2  ## number of active jammers
sigma_jk = np.array([50, 40])
f_1 = np.array([0.0, .15])
f_2 = np.array([0.98, 0.99])
delta_f_1 = f_1[1] - f_1[0]
delta_f_2 = f_2[1] - f_2[0]

### use R_I_k as above
r_1_k = np.zeros(N, dtype = complex)
r_2_k = np.zeros(N, dtype = complex)
for i in range(0, N):
    r_1_k[i] = np.exp(2 * np.pi * .7 * (i + 1) * 1j)
    r_2_k[i] = np.exp(1j * 2 * np.pi * .75 * (i + 1))
R_J_1 = np.outer(r_1_k, np.asmatrix(r_1_k).H)
R_J_2 = np.outer(r_2_k, np.asmatrix(r_2_k).H)
R_J = [R_J_1, R_J_2]

omega_1 = [0.0000, 0.0617]
omega_2 = [0.0988, 0.2469]
omega_3 = [0.2593, 0.2840]
omega_4 = [0.3086, 0.3827]
omega_5 = [0.4074, 0.4938]
omega_6 = [0.5185, 0.5556]
omega_7 = [0.9383, 1.0000]
omega = [omega_1, omega_2, omega_3, omega_4,
         omega_5, omega_6, omega_7]

R_I_1 = compute_RI(N, [0.0000, 0.0617])
R_I_2 = compute_RI(N, [0.0988, 0.2469])
R_I_3 = compute_RI(N, [0.2593, 0.2840])
R_I_4 = compute_RI(N, [0.3086, 0.3827])
R_I_5 = compute_RI(N, [0.4074, 0.4938])
R_I_6 = compute_RI(N, [0.5185, 0.5556])
R_I_7 = compute_RI(N, [0.9383, 1.0000])
R_I = [R_I_1, R_I_2, R_I_3, R_I_4,
       R_I_5, R_I_6, R_I_7]

sigma_J = [50, 40]

M = np.zeros((N, N), dtype = complex)
R_I_k = np.zeros((N, N), dtype = complex)
for k in range(0, K):
    M += 10 / (omega[k][1] - omega[k][0]) * R_I[k]
    R_I_k += R_I[k]    
R_I_k = np.matrix(R_I_k)

for k in range(0, K_j):
    M += sigma_J[k] * R_J[k]
    
M = np.asmatrix(M)
I = np.eye(N, N)


# In[9]:


## algorithm 1
## 1
M = R_I_k
[C0,c0] = compute_C0(N)
P4_out = P4(M, R_I_k, C0)
C_bar = np.asmatrix(P4_out[0].value)
C_bar = C_bar + 1e-3 * np.eye(N,N)  ## force to PSD
v_star = P4_out[1].value
rank_C_bar = np.linalg.matrix_rank(C_bar)     ## np.rank no longer exists in numpy
if rank_C_bar == 1:
    w, v_arr = LA.eigh(C_bar)                 ## eigenvalues ascending, columns are eigenvectors
    v = v_arr[:, np.argmax(w)]                ## eigenvector of the largest eigenvalue
    c_bar = np.sqrt(np.max(w)) * v            ## c_bar = sqrt(lambda_max) * v
elif rank_C_bar == 2:
    c_bar = oc.fcndcmps14(M, I, R_I_k, C0, C_bar)
else:
    c_bar = oc.fcndcmps14(M, I, R_I_k, C0, C_bar)
c_bar = np.asarray(c_bar).reshape(N, -1)      ## columns are candidate code vectors
phase = np.angle(np.vdot(c_bar[:, 0], np.asarray(c0).flatten()))  ## arg(c_bar^dagger c_0)
c_star = c_bar * np.exp(1j * phase)



# ## Make plots

# In[10]:


plt.acorr(np.abs(c_star[:,0]) ** 2, maxlags = None,
          usevlines = False, linestyle = '-', marker = '')
plt.show()


# In[11]:


fs = 10e3
N = 1e5
amp = 2*np.sqrt(2)
freq = 1234.0
noise_power = 0.001 * fs / 2
time = np.arange(N) / fs
x = amp*np.sin(2*np.pi*freq*time)
x += np.random.normal(scale=np.sqrt(noise_power), size=time.shape)

f, Pxx_den = signal.welch(x, fs, nperseg=1024)
plt.semilogy(f, Pxx_den)
plt.ylim([0.5e-3, 1])
plt.xlabel('frequency [Hz]')
plt.ylabel('PSD [V**2/Hz]')
plt.show()


# In[1]:


get_ipython().system('jupyter nbconvert --to script mycode.ipynb')
with open('spectrally_crowded.py', 'r') as f:
    lines = f.readlines()
with open('spectrally_crowded.py', 'w') as f:
    for line in lines:
        if 'nbconvert --to script' in line:
            break
        else:
            f.write(line)
%% Matlab Code for rank-one decomposition - by Yongwei Huang and Shuzhong Zhang
%% April 2007

%% Plus the additional functions: solneqnsys.m
%%                                solneqn3.m
%%                                solneqnsys3.m
%%                                fcndcmps12.m

%% input: A (real symmetric, Hermitian)
%%        B (real symmetric, Hermitian)
%%        C (real symmetric, Hermitian)
%%        X (psd, complex Hermitian symmetric, and could be real symmetric)

%% output: Xs: each column of Xs is the decomposed vector;
%%          r: rank of X;
%%       rdel= trace(AX)/r
%%       rdelb=trace(BX)/r
%%       redlc=trace(CX)/r
%%       clas1=Xs(:,r-1)'*C*Xs(:,r-1)
%%       clas2=Xs(:,r)'*C*Xs(:,r)

%% Given problem size: n
%% Methods to randomly generate A:
    %% construct A (Hermitian):

    %% method 1:
    %A1=randn(n); 
    %A2=randn(n); 
    %A=(A1+i*A2+(A1+i*A2)')/2;
    
    %% method 2:
    %A1=randn(n);
    %UA1=triu(A1); %get A1's upper triangle matrix including diagonal elements
    %LA1=tril(A1);
    %A=UA1+UA1'-diag(diag(UA1))+i*(LA1-LA1');
    
    %% construct A (real symmetric)
    %A=real(A);
    %or
    %A=(A1+A1')/2;
    
    %% construct integer-entry A
    % A=round(10*A) % or A=fix(10*A);
    
%% construct B
    %% method 1:
    %B1=randn(n); 
    %B2=randn(n); 
    %B=(B1+i*B2+(B1+i*B2)')/2;
    
    %% method 2:
    %B1=randn(n);
    %UB1=triu(B1); % get B1's upper triangle matrix including diagonal elements
    %LB1=tril(B1);
    %B=UB1+UB1'-diag(diag(UB1))+i*(LB1-LB1');
    
    %% construct B (real symmetric)
    %B=real(B);
    %or
    %B=(B1+B1')/2;
    
    %% construct integer-entry B
    % B=round(10*B) % or B=fix(10*B);
    
%% construct C
    %% method 1:
    %C1=randn(n); 
    %C2=randn(n); 
    %C=(C1+i*C2+(C1+i*C2)')/2;
    
    %% method 2:
    %C1=randn(n);
    %UC1=triu(C1); % get C1's upper triangle matrix including diagonal elements
    %LC1=tril(C1);
    %C=UC1+UC1'-diag(diag(UC1))+i*(LC1-LC1');
    
    %% construct C (real symmetric)
    %C=real(C);
    %or
    %C=(C1+C1')/2;
    
    %% construct integer-entry C
    % C=round(10*C) % or C=fix(10*C);

%% construct X: method 1
%X1=randn(n);X2=randn(n);
%X=(X1+i*X2)*(X1+i*X2)';
% construct X (psd, real symmetric)
%X=real(X);

%% construct X: method 2
%rr=n-1; % rank of X, this parameter is only for the use of generating X, controlling the rank of X.
%X=zeros(n);
%for ii=1:rr
%    rdv=randn(n,1);
%    idv=randn(n,1);
%    X=X+(rdv+i*idv)*(rdv+i*idv)';
%end

%% construct integer-entry X:
   %rr=n-1; 
   %X=zeros(n);
   %for ii=1:rr
   %    rdv=floor(randn(n,1));  % ceil, round, fix
   %    idv=floor(randn(n,1));
   %    X=X+(rdv+i*idv)*(rdv+i*idv)';
   %end

% construct X (psd, real, rank controlled)
%rr=n-1;
%X=zeros(n);
%for ii=1:rr
%    rdv=randn(n,1);
%   X=X+(rdv)*(rdv)';
%end


%% function starts...

function [Xs,r,rdel,rdelb,rdelc,clas1,clas2]=fcndcmps13(A,B,C,X)


epsi=10^(-8);

%% begin of checking inputs

% check size of inputs

%if isreal(X)
%    disp('X should be complex-valued matrix.');
%    return;
%end

%% when inputs A,B,C,X are real, it can works as well (outputting rank-2
%% decomposition of X);

[rowA,colA]=size(A);
[rowB,colB]=size(B);
[rowC,colC]=size(C);
[rowX,colX]=size(X);
if (rowA==colA)&&(rowB==colB)&&(rowC==colC)&&(rowX==colX)&&(rowX==rowA)&&(rowA==rowB)&&(rowB==rowC)
    n=rowA;
else
    error('A, B, C and X must be square matrices and the same size.');
    %disp('A, B, C and X must be square matrices and the same size.');
    %return;
end
%n=length(diag(X));
if n<=2
    error('Size must be >=3.');
    %disp('Size must be >=3.');
    %return;
end

% check Hermitian or symmetric of A, B, C and X 
for ii=1:n-1
    for jj=ii+1:n
        if (abs(real(A(ii,jj)-A(jj,ii)'))>10^(-10)) || (abs(imag(A(ii,jj)-A(jj,ii)'))>10^(-10)) ||(abs(real(B(ii,jj)-B(jj,ii)'))>10^(-10)) || (abs(imag(B(ii,jj)-B(jj,ii)'))>10^(-10)) || (abs(real(C(ii,jj)-C(jj,ii)'))>10^(-10)) || (abs(imag(C(ii,jj)-C(jj,ii)'))>10^(-10)) || (abs(real(X(ii,jj)-X(jj,ii)'))>10^(-10)) || (abs(imag(X(ii,jj)-X(jj,ii)'))>10^(-10)) 
            error('A, B, C and X must be Hermitian.');
            %disp('A, B, C and X must be Hermitian.');
            %return;
        end
    end
end

% check positive semidefinitness of X
eigX=eig(X);
for ii=1:n
    if eigX(ii)<=-10^(-10)
        error('X must be positive semidefinite.');
        %disp('X must be positive semidefinite.');
        %return;
    end
end

% rank of X
%r=rank(X);

r=0;
for ii=1:n
    if abs(eigX(ii))>=10^(-10)
        r=r+1;
    end
end

if r<=2
    error('The rank of X should be >= 3');
    %disp('The rank of X should be >= 3');
    %return;
end

%% end of checking input




%define Xs to save decomposed vectors
Xs=zeros(n);
%inner products 
%vA=vect(A);
%vB=vect(B);
%vC=vect(C);
%vX=vect(X);
%delta=real((vA)'*vX);
%% delta=trace(A'*X)=trace(X'*A)=trace(A*X')=trace(A*X)
delta=real(trace(A*X'));
rdel=delta/r;
%delta=real((vB)'*vX);
delta=real(trace(B*X'));
rdelb=delta/r;
%delta=real((vC)'*vX);
delta=real(trace(C*X'));
rdelc=delta/r;
clas1=0;
clas2=0;

%% if X is Hermitian, then please use function fcndcmps12.m

%if norm(imag(vect(X)))>=epsi % or norm(imag(X),'fro')>=epsi, if no vect.m
%    Xs=fcndcmps12(A,B,X);
%    return
%end

%% if B=0, please use function fcndcmps1.m

%if norm(vect(B))<=epsi  % or norm(B,'fro')<=epsi, if no vect.m
%    Xs=fcndcmps1(A,X);
%    return
%end


%% if rank of X <=2, done return solution infomation

%if (r<=2)
%    Xs=fcndcmps12(A,B,X);
%    disp('rank of X <=2, only return solution satisfying A,B, but not for C')
%    return
%end


rt=r; % rank test, rt from r to 2; rt>=3
Xs1=zeros(n);
Xu=X;
for ii=1:n
    Xs1=fcndcmps12(A,B,Xu); %% rank(Xs1)=rank(Xu)=r-rank(Xs)
    %% find v1 and v2 s.t. v1'*C*v1>rdelc, v2'*C*v2<rdelc, and
    %% v1'*A*v1=v2'*A*v2=rdel,v1'*B*v1=v2'*B*v2=rdelb
    v1=zeros(n,1);
    v2=zeros(n,1);
    for jj=1:n
        vv=Xs1(:,jj);
        if norm(vv)>=epsi
           deltemp=real(vv'*C*vv); %or real(vC'*vect(vv*vv'))   or real(trace(C*(vv*vv')))
           if (deltemp>rdelc)&&(abs(deltemp-rdelc)>=epsi)&&(norm(v1)==0)
               v1=vv;
               dltp1=deltemp-rdelc;
               jj1=jj;
           elseif (deltemp<rdelc)&&(abs(deltemp-rdelc)>=epsi)&&(norm(v2)==0)
               v2=vv;
               dltp2=deltemp-rdelc;
               jj2=jj;
           end
        else
            continue
        end
        if (norm(v1)>0)&&(norm(v2)>0)
            break; 
        end
    end
    
    if  (norm(v1)==0)&&(norm(v2)==0) %%could be norm(v1)==0 || norm(v2)==0
        break;    %% break 1, rarely happen; after the break, rt>=3, norm(v1)==0, norm(v2)==0, i.e., Xs already equalized with A,B,C.
    end
    %%end find v1 and v2
    
    %% find v3 from Xs1 s.t. v3~=0, v3~v1, v3~=v2; rank(Xs1)=rt >=3 at the
    %% moment
    v3=zeros(n,1);
    for jj=1:n
        if (jj~=jj1)&&(jj~=jj2)&&(norm(Xs1(:,jj))>epsi)
            v3=Xs1(:,jj);
            break;
        end
    end
    %% v3 found; if v3=0, the only case is rank(Xs1)=2, which will not
    %% happen here. 
    
    %% find a new x
    a=zeros(12,1);
    a(1)=v1'*A*v2;a(2)=v2'*A*v3;a(3)=v3'*A*v1;
    a(4)=v1'*B*v2;a(5)=v2'*B*v3;a(6)=v3'*B*v1;
    a(7)=dltp1;a(8)=dltp2;a(9)=2*v1'*C*v2;a(10)=2*v2'*C*v3;a(11)=2*v3'*C*v1;a(12)=real(v3'*C*v3-rdelc);
    s=solneqnsys3(a);
    newx=(s(1)*v1+s(2)*v2+s(3)*v3)/(norm(s));  %% s must be a non-zero soln.
    
    Xs(:,(r-rt+1))=newx;
    Xu=Xu-newx*newx';
    rt=rt-1;
    if rt==2
        break; %% break 2, must happen; after the break, rt==2, but norm(v1)>0, and norm(v2)>0.
    end
end

%continue break 1: rt>=3, rarely happen
if rt>=3
    disp('f13 Done-1'); %% no v1, v2 found, this case happens only when Xs1=0, or Xs1 equalized with C
        % at the moment the rank of Xs1 is rt>=3; rank(Xs1)=rank(Xu)=rt,
        % rank(Xs)=r-rt
    DeXs1=zeros(n,rt);
    jj=1;
    for ii=1:n
        if norm(Xs1(:,ii))>epsi
            DeXs1(:,jj)=Xs1(:,ii); % rank of Xs1 = rt
            jj=jj+1;
        end
     end
     Xs(:,(r-rt+1):r)=DeXs1;
end

%continue break 2: rt==2, must happen
if rt==2
    DeXs1=zeros(n,rt);
    Xs1=fcndcmps12(A,B,Xu);
    jj=1;
    for ii=1:n
        if norm(Xs1(:,ii))>epsi
            DeXs1(:,jj)=Xs1(:,ii); % rank of Xs1 = rt
            jj=jj+1;
        end
     end
     Xs(:,(r-rt+1):r)=DeXs1;
end

r;
rdel;
Xs(:,r-2)'*A*Xs(:,r-2);
Xs(:,r-1)'*A*Xs(:,r-1);
Xs(:,r)'*A*Xs(:,r);
rdelb;
Xs(:,r-2)'*B*Xs(:,r-2);
Xs(:,r-1)'*B*Xs(:,r-1);
Xs(:,r)'*B*Xs(:,r);
rdelc;
Xs(:,r-2)'*C*Xs(:,r-2);
clas1=real(Xs(:,r-1)'*C*Xs(:,r-1));
clas2=real(Xs(:,r)'*C*Xs(:,r));
%% This function is to solve the following equation: 
%%  imag(conj(s(1))*s(2))+real(a(1)*conj(s(2)))+real(a(2)*s(1))+a(3)=0.

%% Inputs(coefficients of the equation): a(1),a(2): complex or real number;
%%                                            a(3): real;
%% Output(solution): s(1),s(2).



function s=soln_1eqn_3n(a)

epsi=10^(-8);
a2=a(1);a3=a(2);a4=real(a(3));

s=zeros(2,1);
s1=s(1);s2=s(2);

%% check in input
if (length(a)~=3)||(abs(imag(a(3)))>=epsi)
    disp('coefficents assumption not satisfied - Stop 1 from soln_1eqn_3n')
    return
end
%% end of check in input

u2=abs(a2);xsi2=angle(a2);
u3=abs(a3);xsi3=angle(a3);

if abs(a4)<epsi % or abs(a4)==0
    s1=0;s2=0;
else
    if (u2<epsi)&&(u3<epsi) % or (u2==0)&&(u3==0)
        s1=1;s2=-a4*i;
    end
    if (u2>=epsi) % or u2~=0
        s1=0;s2=-a4/u2*exp(i*xsi2);
    end
    if (u3>=epsi)&&(u2<epsi) % or u3~=0 && u2==0
        s1=-a4/u3*exp(i*(-xsi3));s2=0;
    end
end
s(1)=s1;s(2)=s2;

% test: 
% ia=randn(3,1);ib=randn(3,1);a=complex(ia,ib);a(3)=real(a(3));
% s=soln_singleeqn_3n(a);
%eqn=imag(conj(s(1))*s(2))+real(a(1)*conj(s(2)))+real(a(2)*s(1))+a(3) %% is it equal to zero?     
%% this function is to solve the following two equations with 3 vars.
%%                  a1*s1*s2+a2*s2*s3+a3*s1*s3        =0
%% bm1*s1^2+b0*s2^2+b1*s1*s2+b2*s2*s3+b3*s1*s3+b4*s3^2=0,
%% where bm1>0 and b0<0;

%% input: a -- 9-dim vector (real number)

%% output: s1,s2,s3: a solution of the equation system
%%                   (as long as bm1*b0<0, there must be a NONZERO solution)


function s=solneqnsys(a)

epsi=10^(-8);
a1=a(1);a2=a(2);a3=a(3);
bm1=a(4);b0=a(5);b1=a(6);b2=a(7);b3=a(8);b4=a(9);

s=zeros(3,1);
s1=s(1);s2=s(2);s3=s(3);

if (length(a)~=9)||(bm1*b0>=0)
    disp('coefficents assumption not satisfied - from solneqnsys')
    return
end

if bm1<0
    bm1=-bm1;b0=-b0;b1=-b1;b2=-b2;b3=-b3;b4=-b4;
end

if abs(a1)<=epsi
    s3=0;s2=1;s1=(-b1+sqrt(b1^2-4*bm1*b0))/(2*bm1);
    return
else 
    s3=1;
end

if (abs(a2)<=epsi)&&(abs(a3)<=epsi)
    if b4<=0
        s2=0;s1=(-b3+sqrt(b3^2-4*bm1*b4))/(2*bm1);
    else
        s1=0;s2=(-b2+sqrt(b2^2-4*b0*b4))/(2*b0);
    end
elseif (abs(a2)>epsi)&&(abs(a3)<=epsi)
    if b4<=(b3^2)/(4*bm1)
        s2=0;s1=(-b3+sqrt(b3^2-4*bm1*b4))/(2*bm1);
    else
        s1=-a2/a1;
        c1=b2+b1*s1;
        c2=bm1*(s1)^2+b3*s1+b4;
        s2=(-c1+sqrt(c1^2-4*b0*c2))/(2*b0);
    end
elseif (abs(a2)<=epsi)&&(abs(a3)>epsi)
    if b4>=(b2^2)/(4*b0)
        s1=0;s2=(-b2+sqrt(b2^2-4*b0*b4))/(2*b0);
    else
        s2=-a3/a1;
        c1=b3+b1*s2;
        c2=b0*(s2)^2+b2*s2+b4;
        s1=(-c1+sqrt(c1^2-4*bm1*c2))/(2*bm1);
    end
elseif (abs(a2)>epsi)&&(abs(a3)>epsi)
    lam1=(a1^2)*b0;
    lam2=2*a1*a3*b0-a1*a2*b1+(a1^2)*b2;
    lam3=(a2^2)*bm1+(a3^2)*b0-a2*a3*b1+2*a1*a3*b2-a1*a2*b3+(a1^2)*b4;
    lam4=(a3^2)*b2-a2*a3*b3+2*a1*a3*b4;
    lam5=b4*a3^2;
    rs=roots([lam1,lam2,lam3,lam4,lam5]);
    for ii=1:length(rs)
        if (abs(imag(rs(ii)))<=epsi)&&(abs(a3+a1*real(rs(ii)))>epsi)
            s2=real(rs(ii));
            s1=-a2*s2/(a3+a1*s2);
            break;
        end
    end
end
        
s(1)=s1;s(2)=s2;s(3)=s3;
% Test:
% a=randn(9,1);a(4)=abs(a(4));a(5)=-abs(a(5));
% s=solneqnsys(a);
%eqn1=a(1)*s1*s2+a(2)*s2*s3+a(3)*s1*s3
%eqn2=a(4)*s1^2+a(5)*s2^2+a(6)*s1*s2+a(7)*s2*s3+a(8)*s1*s3+a(9)*s3^2

And then here’s Elrin’s explanation (I think that’s his name? I really don’t know). Instagram has disabled my account and I’ve lost contact. But this took him like, 4 minutes. I was impressed. It took me 10.

2 responses to “Imitation: Radar Waveform Design in a Spectrally Crowded Environment via Nonconvex Quadratic Optimization”

  1. I had to update the spelling of my adviser’s name. Not sure how I missed that; I’ll never forget his name. His post-doc I just called Xian.

  2. Wait, I’m actually wrong, the author is correct, he’s just kinda vague.
