
Commit f61d145

Marek Vasut authored and Vinod Koul committed
dmaengine: xilinx: xilinx_dma: Fix residue calculation for cyclic DMA
The cyclic DMA residue calculation is currently entirely broken and reports residue only for the first segment. The problem is twofold.

First, when the first descriptor finishes, it is moved from the active_list to the done_list, but it is never returned to the active_list. xilinx_dma_tx_status() expects the descriptor to be on the active_list in order to report any meaningful residue information, which never happens after the first descriptor finishes. Fix this up in xilinx_dma_start_transfer(): if the descriptor is cyclic, lift it from the done_list and place it back on the active_list.

Second, the segment .status fields of the descriptor remain dirty. Once the DMA has done one pass over the descriptor, the .status fields are populated with data by the DMA, but they are not cleared before reuse during the next cyclic DMA round. xilinx_dma_get_residue() treats that as if the descriptor were complete with 0 residue, which is bogus. Reinitialize the status fields before placing the descriptor back on the active_list.

Fixes: c0bba3a ("dmaengine: vdma: Add Support for Xilinx AXI Direct Memory Access Engine")
Signed-off-by: Marek Vasut <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Vinod Koul <[email protected]>
1 parent e9cc953 commit f61d145

1 file changed

Lines changed: 22 additions & 1 deletion

File tree

drivers/dma/xilinx/xilinx_dma.c

```diff
@@ -1564,8 +1564,29 @@ static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
 	if (chan->err)
 		return;
 
-	if (list_empty(&chan->pending_list))
+	if (list_empty(&chan->pending_list)) {
+		if (chan->cyclic) {
+			struct xilinx_dma_tx_descriptor *desc;
+			struct list_head *entry;
+
+			desc = list_last_entry(&chan->done_list,
+					       struct xilinx_dma_tx_descriptor, node);
+			list_for_each(entry, &desc->segments) {
+				struct xilinx_axidma_tx_segment *axidma_seg;
+				struct xilinx_axidma_desc_hw *axidma_hw;
+				axidma_seg = list_entry(entry,
+							struct xilinx_axidma_tx_segment,
+							node);
+				axidma_hw = &axidma_seg->hw;
+				axidma_hw->status = 0;
+			}
+
+			list_splice_tail_init(&chan->done_list, &chan->active_list);
+			chan->desc_pendingcount = 0;
+			chan->idle = false;
+		}
 		return;
+	}
 
 	if (!chan->idle)
 		return;
```
